The Use of New Technology in Qualitative Research. Introduction to Issue
Abstract: As society transforms and is transformed by new technology, so there are new ways in which qualitative researchers collect
and analyse data and new forms of data to collect. This paper sets in context the contributions in this issue of FQS
by examining these developments. The spread of video and photographic
technology means that images can be used both as sources
of data and as tools for data collection. The digital form much
audio and video data now takes makes possible new ways of
creating, processing and analyzing such data. The parallel
growth of the Internet also makes available new ways of collecting
qualitative data and new settings in which to collect it.
However, such developments raise issues about the way researchers
collect, process and publish data and how they produce high
quality analyses. Digital technology has also meant that new ways
of analyzing data through computer assisted qualitative data
analysis (CAQDAS) are now possible. There is now a range of
such software and, in response to demand, developers are still
adding new features and functions that researchers need to
understand. The diversity of software means that there is a
need for standards for storing and exchanging qualitative data
and analyses. Nevertheless, there is still much debate about
the degree to which CAQDAS can itself produce qualitative analysis
or merely assist with its development by human researchers. At
the same time there is now evidence of analytic developments
made possible by the use of new technology.
Key words: CAQDAS, computer assisted qualitative data analysis, images, video, digital data, validity, reliability, coding, code and
retrieve, XML, qualitative models
Table of Contents
1. Introduction
2. Data Gathering
3. Computer Assisted Qualitative Data Analysis (CAQDAS)
4. Scepticism About the Use of CAQDAS
5. The Quality of Qualitative Research
6. The Future
1. Introduction
Perhaps the earliest use of technology in
qualitative research was when researchers first used tape recorders in
their field
studies to record interview sessions. In one sense this was
clearly an easier way for researchers to keep a record of events
and conversations, but it had two unforeseen consequences.
First, it began to shift the effort of work in making a record
of sessions from the researcher (who traditionally took
handwritten notes) to others, such as secretaries and audio typists.
This separation had an impact not only on how close to (or
distant from) the data the researcher could remain, but also on
the relationship between the data and the emerging analytic
ideas of the researcher. Having a recording and a transcript meant
that new ways of thinking about how the analysis developed out
of the data and how the analysis was supported by the data
became possible. Second, it allowed different kinds of analysis
that could only be undertaken if accurate records of the speech
were kept. This made possible a focus on the small scale and
minute content and characteristics of speech. It also opened
up the possibilities of much larger scale studies and the use
of multiple researchers and analysts.
The dual impact of new technology both on what
kinds of data can be collected and recorded and on what kinds of
analysis it
makes possible has continued to the present day. In the 21st
century, the use of new technology still raises issues about what
should be analysed, how it should be analysed, and in what ways
the knowledge and understanding gained differ from, and are more
or less well founded than, those gained in more traditional
ways. The papers in this issue address both these impacts of the
technology: new ways of recording and collecting data, and new
ways of undertaking the analysis. Most researchers recognize
that the use of new technology affects both.
2. Data Gathering
Audio recording is an analogue technology, as
are film and traditional video. There is a long history of their use in
many
areas of social and psychological research and especially in
anthropology. Recent changes in this technology have taken several
forms. First it has become cheaper and more widespread. This
means that the technology is more available to researchers, but
also that the people being researched are more used to being
recorded by the technology and even familiar with using it themselves.
For example, in the case of video, people are now used to being
recorded whether as part of a "holiday video" or as part of
the now widespread CCTV (Closed Circuit Television) security
systems. They are often familiar with making their own video
recordings and with "reading" the wide variety of video
material they are presented with. Both the cheapness and ubiquity
of the technology mean that there are new opportunities for
researchers not only to record settings but also to use the technology
to create new data. Naturally, the use of such technology
raises issues of interpretation, impact and validity that researchers
need to deal with.
There are two examples in this issue. KANSTRUP (Picture the Practice—Using Photography to Explore Use of Technology Within Teachers' Work Practices)
discusses the use of a digital camera in research about the use of
technology in teachers' work practices. Initially she
used the images displayed on a laptop computer as a way of
prompting teachers' discussion about their work practice. However,
she found that they very quickly ignored the pictures and
started more general discussions about their work practices. As
KANSTRUP puts it, "the teachers went beyond rather than into
the photographs". Consequently, she used printed versions of
the photos as the basis of a group discussion amongst the
researchers. Whilst this prompted some creative thinking about teachers'
front and back stage activities, it raised the important
question of whether the researchers' interpretation of the photos
was the same as teachers' actual experience. In fact, as
KANSTRUP concludes, the photos were better as ways of raising questions
than answering them. In a quite different context, KOCH and
ZUMBACH (The Use of Video Analysis Software in Behavior Observation Research: Interaction patterns in task-oriented small groups)
discuss the use of the video analysis software, THEME, to identify
communicative patterns in two distinct examples of task-oriented
small group interaction. They focused on power-related and
support-related behavior as well as verbal and nonverbal patterns
in the behavior. With the software they found two interaction
patterns that it would have been hard to detect without the
use of the software: a clear example of how the use of the
software makes new forms of data and analysis available.
One of the most recent developments in video
and audio has been the rapid introduction of digital technology. Not
only has
this made the technology cheaper and more widely used, but also
it has made possible new ways of manipulating and analyzing
the data collected. This can be seen particularly in digital
video where there is now some excellent software that can be
used to display, examine and edit digital video recording in
ways that are much easier (and cheaper) than non-digital video.
SECRIST, DE KOEYER, BELL, and FOGEL (New Tools for Understanding Infant Development in Qualitative Research)
in their paper in this issue explain how Adobe Premiere, software
usually used in the creative professions to edit video,
was used to create, quickly and reliably, sequences about
infant development. The software makes it possible to rearrange,
present, and navigate through video in ways that were not
possible before. Whereas research previously involved the arduous
creation of written sequence narratives, with the software
researchers can select video clips of just those behaviors of
interest, quickly inspect the relevant behaviors, and
come to analytical conclusions.
The development of information technology and
particularly the growth of the Internet has created not only new ways in
which
researchers can analyse their data, but also created whole new
areas from which data can be collected and ways in which it
can be collected. The former include discussion lists, text
forums, personal Web pages and video conferences. The latter include
usage logs, text content logs as well as digitized recordings.
At its most basic, the Internet, and e-mail in
particular, offers a new way of carrying out the traditional,
qualitative,
face-to-face interview. The advantages and disadvantages of
this and the issues it raises for research are discussed by BAMPTON
and COWTON (The E-Interview). As they point out, one key
advantage here is that there is no need for transcription. Moreover,
the e-interview might enable research about new social
groupings, given that constraints of time, travel and financial
resources
do not apply. However, problems of how to establish and
preserve rapport are created and the authors explore the issues that
arise from the physical remoteness between interviewer and
interviewee and the absence of cues and tacit signs provided by
body language. As they point out, researchers need to be aware
of the speed at which they should reply and at which they can
expect replies from respondents. However, given the necessarily
extended duration of e-interviews, there is no reason why
several respondents cannot be interviewed at the same time. At
the moment too, as they point out, researchers need to be
aware of the biased samples that might result from surveying
only those with good e-mail access. HOLGE-HAZELTON (The Internet—A New Field for Qualitative Inquiry?)
makes similar points based on her research about diabetes sufferers.
She employed a free association interview method
adapted from psychoanalytic therapy and communicated with
respondents using e-mail. Despite dealing with highly personal
and emotionally charged topics, she found that compared with
her earlier, face-to-face interviews, there was a lack of inhibition
and rapport was easily established. However, she did note some
gender differences. Women generally gave quicker and more emotionally
detailed responses. Some authors have pointed to the anonymous
and disembodied nature of electronic communication; however,
HOLGE-HAZELTON found that her respondents often overcame this
by the mutual exchange of personal and demographic details, including
pictures of themselves.
KÖRSCHEN, POHL, SCHMITZ and SCHULTE (New Techniques in Qualitative Conversation Analysis: Computer-based Transcription of Videoconferences)
in this issue discuss the parallel questions that arise when applying a
conversation-analytic approach to videoconferences. In
particular, they point out that conventional forms of
transcription fail to take into account the issue of time delays between
sites and the visual information that is also exchanged. For
that reason, they suggest, current multimedia transcription approaches
need to be modified to take into account the specifics of
video conference data and to make them accessible to qualitative
data analysis. They suggest a computer-mediated process of
transcription can be used.
E-mail and videoconferencing clearly involve forms of communication that do not exactly mirror the oral forms found in the
traditional interviews and conversations. In this issue, MOSS and SHANK (Using Qualitative Processes in Computer Technology Research)
consider the broader impact of the Internet on such communications.
They suggest that computer mediated interaction should
be considered as neither oral nor written language, but as a
post-literate transformation of language itself. In particular
they suggest that this transformation can only be properly
studied using qualitative methodologies. They examine this in the
context of an online educational environment and conclude that
online discourse is significantly different from others in
terms of temporality, the influence of community and
reflexivity. For them, online discourses allow modes of communications
that foster learning in ways that cannot be done in
face-to-face environments.
3. Computer Assisted Qualitative Data Analysis (CAQDAS)
It is clear that the introduction of new
technology has both expanded the ways in which qualitative researchers
can collect
data and also the settings and situations from which data can
be collected. The other major impact of technology on qualitative
work discussed in this issue has been on how the analysis is
done. Computer assisted qualitative data analysis software (CAQDAS),
a term introduced by FIELDING and LEE as the name of their
networking project, refers to the wide range of software now available that supports a variety of analytic styles in qualitative work (LEE & FIELDING 1995).
WOLCOTT in his discussion of qualitative
analysis makes a distinction between analysis that is data management,
in other words,
that is concerned with the more effective handling of data, and
analytic procedures, where features and relationships are
revealed (WOLCOTT 1994). It is the common experience of
researchers carrying out qualitative analysis that such work requires
careful and complex management of large amounts of texts,
codes, memos, notes and so on. The prerequisite of really effective
qualitative analysis, it could be said, is efficient,
consistent and systematic data management. The early programs focused
on data management and those most available now provide
considerable assistance in these activities. The use of such text
retriever and text base manager programs, and related facilities
such as simple searching in CAQDAS, is relatively unpretentious.
In fact many of these aspects of data management do not need
dedicated CAQDAS and much can be achieved with the use of other
commonly available software such as word processors and
databases. Such possibilities are examined by NIDERÖST (Computer-Aided Qualitative Data Analysis with Word)
in this issue. This paper explains how data have to be set up for
analysis and how the table function and "search & replace"
command in Word can be used for basic sorting and retrieving
tasks. An analysis according to data attributes (or variables)
like age, gender, profession, etc. is also possible. There is a
clear advantage in that the software is widely available and
most of its functions are familiar to qualitative researchers.
MEYER, GRUPPE and FRANZ (Microsoft Access for the Analysis of Open-ended Responses in Questionnaires and Interviews)
in this issue provide a similar example, in this case using a database
program to analyse open-ended answers from a survey.
The paper describes the process of entering data into Access
and explains how to set up and manage code lists and undertake
data retrieval.
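The mechanics behind both of these office-software approaches are easy to picture. The sketch below uses Python's built-in sqlite3 module rather than Word or Access, and every table, column and code name in it is invented; it is meant only to illustrate the pattern the authors describe: answers stored alongside respondent attributes, a separate code list, and retrieval of coded text filtered by those attributes.

```python
# A minimal sketch of database-backed code-and-retrieve, using Python's
# built-in sqlite3 module instead of Access. All table and column names,
# and the data, are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE respondents (id INTEGER PRIMARY KEY, age INTEGER, gender TEXT);
    CREATE TABLE answers (id INTEGER PRIMARY KEY, respondent_id INTEGER, text TEXT);
    CREATE TABLE codes (id INTEGER PRIMARY KEY, name TEXT);     -- the code list
    CREATE TABLE codings (answer_id INTEGER, code_id INTEGER);  -- code assignments
""")
conn.execute("INSERT INTO respondents VALUES (1, 34, 'f'), (2, 51, 'm')")
conn.execute("INSERT INTO answers VALUES (1, 1, 'The software saves me time.'),"
             " (2, 2, 'The computer distances me from the data.')")
conn.execute("INSERT INTO codes VALUES (1, 'efficiency'), (2, 'distance from data')")
conn.execute("INSERT INTO codings VALUES (1, 1), (2, 2)")

# Retrieval: all answers carrying a given code, together with respondent
# attributes, so results can be broken down by age, gender and so on.
rows = conn.execute("""
    SELECT r.age, r.gender, a.text
    FROM answers a
    JOIN codings c ON c.answer_id = a.id
    JOIN codes k ON k.id = c.code_id
    JOIN respondents r ON r.id = a.respondent_id
    WHERE k.name = 'distance from data'
""").fetchall()
for age, gender, text in rows:
    print(age, gender, text)
```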
These authors clearly demonstrate that it is
possible to use word processors and databases to assist in the analysis
of qualitative
data. This is particularly the case when undertaking initial,
broad-brush examination of the data and when generating simple
counts. However, to go beyond this requires a level of
sophistication with word processors and databases that most
qualitative analysts do not have, or do not have the time to
acquire. And qualitative analysts
do seem to want to go further. One of the arguments used by
developers to support the "effectiveness" of CAQDAS is based on
the programs' origins—many were designed by qualitative researchers
themselves who claim to know the "real" needs of analysts. This
argument has been increasingly reinforced by the development,
over time, of new program features. The second generation of
CAQDAS, for example, introduced facilities for coding text and
for manipulating, searching and reporting on the text thus
coded. Such code and retrieve software is now at the heart of the
most commonly used programs and has extended the use of the
software into areas much closer to the analytic heart of qualitative
research. In so doing it has brought to the fore contested
issues about how far the software can actually assist with analysis
rather than just with data management. For example, there are
those who remain skeptical about the use of software for the
more analytic aspects of qualitative research. An example is
the paper in this issue by THOMPSON (Reporting the Results of Computer Assisted Analysis of Qualitative Research Data)
where he makes a distinction between the mechanical and conceptual
aspects of analysis similar to WOLCOTT's distinction
of data management and analysis. The mechanical aspects refer
to all the activities that underpin qualitative data analysis,
such as marking up selected text with codes, generating
reports, searching the text for key terms, usages and so on. These
can be time consuming, tedious and error prone and it is these
tasks that the computer can assist well with. However, the
conceptual aspects of analysis, that include reading the text,
interpreting it, creating coding schemes and identifying fruitful
searches and reports, need a human and cannot be done by
machine, he suggests.
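THOMPSON's division of labour can be caricatured in a few lines. In the hand-rolled sketch below (not any particular CAQDAS program; the segments and codes are invented), the mechanical work of storing code assignments, retrieving coded segments and searching for key terms is automated, while the conceptual work appears only as the researcher's hand-made coding decisions.

```python
# A hand-rolled sketch of the "mechanical" side of analysis: storing code
# assignments, retrieving coded segments and searching for key terms.
# The "conceptual" side (devising the codes, interpreting the segments)
# appears only as the researcher's hand-made codings dictionary.
segments = {
    1: "I felt the training course ignored our real classroom problems.",
    2: "The new software saved me hours of marking.",
    3: "Nobody asked teachers what they actually needed.",
}
codings = {"frustration": [1, 3], "time saving": [2]}  # the researcher's judgement

def retrieve(code):
    """Report all segments carrying a given code."""
    return [segments[i] for i in codings.get(code, [])]

def search(term):
    """Find ids of segments containing a key term."""
    return [i for i, text in segments.items() if term.lower() in text.lower()]

print(retrieve("frustration"))
print(search("teachers"))
```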
Some programs have functions that go well
beyond manipulating, searching and reporting on coded text. They assist
with analytic
procedures by providing a variety of facilities to help the
analyst examine features and relationships in the texts. Such
programs are often referred to as theory builders or model
builders, not because on their own they can build theory, but because
they contain various tools that assist researchers to develop
theoretical ideas and test hypotheses. Such features are characteristic
of what MANGABEIRA refers to as the third generation of CAQDAS
development (MANGABEIRA 1995). Some programs have also extended
the forms of work supported beyond the lone researcher
examining plain text. For instance, some support rich text, diagrams
and the incorporation of images, movies and other multimedia
data. Others have facilities that enable the exchange of data
and analyses between researchers working together
collaboratively. Some papers in this issue examine the new possibilities
here. ZELGER and OBERPRANTACHER (Processing of Verbal Data and Knowledge Representation by GABEK/WinRelan)
show how, using the software they have produced, not only can the range
of data available be coded in the traditional fashion,
but new presentations (e.g. in a visual, tree-based form) can
be produced and a degree of reflexivity can be incorporated
into the analysis. Their method, the holistic processing of
complexity (GABEK) based on the philosophical concept of comprehension
and explanation, is designed to cope with the large, diverse
and often controversial data created in areas such as conflict
studies, organisations, innovation studies and sociology. The
approach is multi-stage. After initial coding, data are assessed,
rated and organised into a conceptual structure, i.e. mind maps
based on the underlying verbal data and linguistic Gestalt.
Furthermore, causal assumptions can be examined in the form of a
complex cause-effect graph that facilitates the analysis
of controversial issues and fosters comparative analyses.
IRION (Collection, Presentation and Analysis of Multimedia Data with Computer),
discusses the issues that arise when qualitative researchers analyse
more than just textual data. There are several questions
about how audio, video and text data integrated together may be
collected and analysed and the paper examines these. It also
discusses the impact on computer-assisted analysis of such
"multimedia" data and suggests that special methods of transcription
may limit the analytic approach. There is therefore a need for
new ways of approaching the analysis of such data. IRION suggests
the application of modular software tools and illustrates his
proposal with an example.
BOURDON (The Integration of Qualitative Data Analysis Software in Research Strategies: Resistances and Possibilities)
discusses how the software can be used to underpin analysis by teams.
In particular he discusses how the use of CAQDAS can
be fully integrated into the research process and how this
integration can support collaborative teamwork and allow the exploration
of analytic dimensions that would be difficult to explore in
other ways. BOURDON examines this in some analysis that used
NVivo and its Merge facility. The latter allowed separately
created computer based analyses to be merged together. He suggests
this is best done using an analysis based on broad themes that
can be agreed and exchanged (using the software facilities)
amongst a team. Whilst this may lose some of the depth and
specificity of the phenomena studied, BOURDON argues that it allows
better exploration of differences between cases and facilitates
the examination of multiple perspectives of the research
team.
The expanding dissemination of CAQDAS along
with the maintenance of a range of different software with different
facilities
and approaches means there are also issues concerned with how
researchers learn to use the programs and which programs they
learn. Two papers in this issue, those by BONG (Debunking Myths in Qualitative Data Analysis)
and by THOMPSON, discuss researchers' first use of the software
(ATLAS.ti and HyperQual2 respectively). They examine some
of the issues researchers need to consider when selecting
software and the analytic approach they are going to take. They
also discuss some of the support facilities available to those
new to the software, such as training courses and on-line discussion
lists. CARVAJAL (The Artisan's Tools. Critical Issues when Teaching and Learning CAQDAS)
examines factors that influence the construction of training courses
for those new to qualitative analysis and new to CAQDAS.
He identifies many of the misconceptions that learners have
about the software, for example, that it will do the analysis
for them and that they will learn about qualitative data
analysis by learning the software. He argues for an approach to training
that focuses initially on the aspects of qualitative analysis
that researchers need to understand before they use the software,
and that then examines several different programs. For example,
in one of his classes, he introduced learners to EZ-Text,
WinMAX 99, NUD*IST 4, and ATLAS.ti 4.2 so they could appreciate
the different facilities they offer. When starting to use
the software, he suggests it is very important that learners
should be able to analyse their own data set, as it is easier
for them to understand how the research questions that arise
from it can be addressed when using the software.
4. Scepticism About the Use of CAQDAS
It is perhaps revealing about the way
qualitative researchers think about themselves and their work that, as
FIELDING points
out, the introduction and use of non-CAQDAS technology has
prompted little comment compared with the intense debates about
CAQDAS (FIELDING 2002, p.161). Concerns about the limitations
of CAQDAS and its impact on the kinds of analysis that can be
undertaken and their quality are reflected in several of the
papers in this issue.
In their recent book, FIELDING and LEE examine
the history of the development of qualitative research and its support
by computers
in the light of the experience of those interviewed in their
study of researchers using CAQDAS (FIELDING & LEE 1998). Amongst
the issues they identify is a feeling of being distant from the
data. Researchers using paper-based analysis felt they were
closer to the words of their respondents or to their field
notes than if they used computers. It was certainly true that some
of the early software made it hard to track back from extracted
text to the context in the original documents from which it
came. But most programs now emphasize their facilities for the
contextualization of data. Another complaint, as many users
and commentators, including several in this issue, have
suggested, is that some software seems too influenced by grounded
theory. This approach, developed by GLASER and STRAUSS (1967),
has become very popular amongst both qualitative researchers
and software developers. The worry is that this may push
analysis in one direction rather than another, that some aspects
of the analysis might be an artifact of the technology used.
Whilst this was a convincing argument about some of the early
versions of current programs, as FIELDING and LEE (1998) point
out, most software is equally influenced. Besides, as programs
have become more sophisticated and flexible, they have become
less connected to any one analytic approach. A related danger
that some have pointed to is the over-emphasis on code and
retrieve approaches which may militate against analysts who wish
to use quite different techniques (such as hyperlinking) to
analyse their data. That grounded theory has become a kind of
paradigm in qualitative analysis and that coding alone is
analysis are two "myths" of qualitative data analysis that BONG,
in this issue, seeks to debunk.
On the other hand, there are those who remain skeptical about the overall philosophical position represented by the use of
software for qualitative data analysis. An example is the paper in this edition by ROBERTS and WILSON (ICT and the Research Process: Issues Around the Compatibility of Technology with Qualitative Data Analysis).
They argue that the central activity of qualitative analysis is the
interpretation of the various shades of meaning found
in conversational and linguistic material. Computers, founded
as they are on a digital and quantitative view of the world,
are limited in how far they can help with such an
interpretation. For ROBERTS and WILSON, there is no clear distinction
between
understanding and interpretation on the one hand and analysis
on the other. Since it is generally agreed that there are limits
to a computer's understanding or interpretation of texts, so too,
they argue, our analysis is little assisted by software outside
purely mechanical tasks such as data management. For them,
creating and applying codes is not analysis. Not everyone will
agree with such views, least of all those following a grounded
theory or template approach, but they are ones that are often
expressed by qualitative analysts coming from a background in
narrative or discourse analysis who often reject absolutist,
deductive and positivist approaches. A similar case for the
importance of the interpretation of meaning is made by MOSS
and SHANK in this issue, when they argue for analysis by "close
reading", a quasi-literary approach, rather than by coding.
This, they suggest, is because it is important to discover
embedded patterns and not to miss infrequent but significant instances
of insight.
A similar caution about the limits of CAQDAS in
analysis is made by THOMPSON. He presents an example of analysis with
HyperQual2
and attempts to provide a model of how to write about the
analytic process. His main argument is that the strength of the
analysis depends to a large extent on the well-established
strategies used in analyzing qualitative research data. Nevertheless,
this is a "taken for granted" assumption shared by all of the
more experienced CAQDAS users. As summarized above, THOMPSON
distinguishes the mechanical and conceptual aspects of analysis
and argues that whilst computers can help with the mechanical,
only humans can undertake the conceptual.
The evocation of human reasoning as the core of
qualitative analysis raises issues regarding representations of
technological artifacts. There is an interesting tension between developers'
claims about CAQDAS capabilities and the meanings attributed
to them by users, in particular settings. While not a skeptic
of CAQDAS use, MANGABEIRA (1996) has pointed to ways in which
users' explanations about CAQDAS uptake as well as their
attributions of what CAQDAS "can do" not only depend on software
features and capabilities but are also shaped by collective
representations of their "effectiveness", the social organisation
of research communities and national intellectual traditions.
In a similar vein, other commentators have noted that CAQDAS
has become very successful, not always for the best of reasons
(FIELDING 2002; SEALE 2002b). There has been a tendency for
researchers to try to give their proposals some kind of gloss
of rigour by suggesting in research bids that the data will
be analysed using a CAQDAS program. It is as if the use of
software will somehow alone improve the quality of their work.
Of course, CAQDAS cannot do that. It is just a tool for
analysis, and good qualitative analysis still relies on good analytic
work by a careful human researcher, in the same way that good
writing is not guaranteed by the use of a word processor.
5. The Quality of Qualitative Research
Much of the thinking about the quality of
research in general originates in ideas derived from the examination of
quantitative
research. Here there is a strong emphasis on ensuring the
validity, reliability and generalization of results so that we
can be sure about the true causes of the effects observed.
There has been much debate about whether such ideas can be applied
to qualitative data and, if they are applicable, what
techniques might be available to qualitative researchers to help ensure
the quality of their analysis.
The issues of quality in qualitative research have been tackled in part by recognizing that, in the
absence of the techniques available to quantitative researchers,
qualitative analysts have to pay more attention to how they
write about their data and present their reports. Another
response by those undertaking qualitative analysis has been to
focus on the possible threats to quality that arise in the
process of analysis. There is a variety of such threats, including
biased transcription and interpretation, the overemphasis of
positive cases, a focus on the exotic or unusual, the ignoring
of negative cases, vague definitions of concepts (or codes),
inconsistent application of such concepts to the data and unwarranted
generalization. As DEY warns,
"Because the data are voluminous, we have to be selective—and we can select out the data that doesn't suit. Because the data are complex, we have to rely more on imagination, insight and intuition—and we can quickly leap to the wrong conclusions" (DEY 1993, p.222).
It is therefore not surprising that it is easy
to produce partial and biased analyses. The use of CAQDAS can make a
positive
contribution here, not least, as FIELDING points out (2002)
because it takes away much of the sheer tedium of qualitative
analysis. Using the software it is easier to be exhaustive in
analysis and to check for negative cases and there are some
techniques for ensuring that text has been coded in consistent
and well-defined ways.
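The text does not say which consistency techniques it has in mind, but a common example is to compare two coders' decisions over the same segments. The sketch below is an illustration of that general idea rather than a method taken from the papers in this issue: it computes raw agreement and Cohen's kappa (agreement corrected for chance) for two coders, using invented data.

```python
# Illustration only: checking coding consistency via inter-coder agreement.
# Two coders assign one code per segment; we compute raw agreement and
# Cohen's kappa (agreement corrected for chance). All data are invented.
from collections import Counter

coder_a = ["frustration", "time saving", "frustration", "distance", "frustration"]
coder_b = ["frustration", "time saving", "distance", "distance", "frustration"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement: for each code, the product of the two coders'
# marginal proportions, summed over all codes used.
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")
```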
Another advantage of using software is that the
analysis is structured and its progress can be recorded as it develops.
Establishing
an audit trail of this kind to show how analytic ideas emerged
and to check that they are not subject to the kinds of biases
mentioned above can be done using CAQDAS, but rarely is. Often
the level of analysis undertaken is disarmingly simple, as
SEALE found when he surveyed published papers that mentioned
the use of CAQDAS (SEALE 2002b). In many cases the analysis was
little more than pattern analysis based on simple code and
retrieve even when authors claimed to be using grounded theory.
In some cases the research showed little real analytic depth
and the analysis tended to be impressionistic and of dubious
reliability or validity. There is clearly still a gap between
the potential role of CAQDAS in assisting the quality of research
and actual practice.
Not everyone agrees that the advantages of CAQDAS in supporting the quality of research are clear-cut. WELSH (Dealing with Data: Using NVivo in the Qualitative Data Analysis Process)
in this issue is clear that it is not possible to eliminate the role of
the human researcher in the analytic process. She
agrees that using the search tool in CAQDAS can "improve the
rigour of the analysis process by validating (or not) some of
the researcher's own impressions of the data." However, the
software is less useful in addressing issues of validity and reliability
in the thematic ideas that emerge during data analysis because
of the fluid and creative way the themes emerge. WELSH therefore
argues that the analyst should not abandon manual methods of
analysis. This may be the only way of examining the thematic
ideas and gaining a deep understanding of the data.
The ease with which CAQDAS can produce quantitative data
and link with statistical analysis programs has also proved
contentious amongst qualitative researchers. There is clearly value in being
able to add quantitative parameters to generalizations made
in analysis. But some feel that the distinctive nature of
qualitative research may be threatened. However, researchers working
in applied settings are often under pressure to combine
qualitative and quantitative analyses. MEYER, GRUPPE and FRANZ (Microsoft Access for the Analysis of Open-ended Responses in Questionnaires and Interviews)
in their use of a database to analyse open-ended answers from a survey
suggest that an advantage of their approach is the
ability to keep a close relationship between the qualitative
data and the quantitative data kept in a statistical package.
This too, is a point of some contention amongst qualitative
researchers. For some, numbers and statistics have little relevance
to qualitative analysis. For them it is the distinctive and
novel analyses that qualitative approaches can produce that are
important. For many other researchers, often those working in
applied and evaluation settings, the ability to link qualitative
analysis with quantitative and statistical results and to
support their qualitative analytical ideas with numeric evidence
is important.
6. The Future
As we have discussed above, one of the recent
changes in the technology that qualitative researchers deal with is that
it
is now almost all in a digital format. This is what some have
referred to as digital convergence and it means that a range
of new approaches both to data collection and to data analysis
are now possible. BROWN (Going Digital and Staying Qualitative: Some Alternative Strategies for Digitizing the Qualitative Research Process)
in this issue explores the kinds of technology now available that
mean qualitative researchers can now consider collecting,
analyzing, reporting and archiving materials in a digital
format. He examines some of the software available that makes the
storage and accessing of such material possible for qualitative
researchers. In particular he discusses how by using formats
such as HTML and PDF researchers can link together a wide range
of materials, both collected data and research notes and a
variety of media types.
Such a convergence will no doubt push analysts
into realizing that new forms of analysis are both possible and
necessary.
There is already a great interest in the visual aspects of
culture and in the importance of embodiment in understanding human
actions. The ready availability of audio, video and still
images is likely to encourage analysts to examine aspects of this
which were hard to record and use as evidence before. Whether
analysts will need new CAQDAS or whether they will choose to
use tools that are common in, for instance, the creative media,
remains to be seen. Already some CAQDAS programs, like HyperRESEARCH,
ATLAS.ti, C-I-SAID and the Qualitative Media Analyser, allow
researchers to code images, digitized speech and video. Some, like NVivo
and ATLAS.ti allow video segments to be hyperlinked in a
limited way. Such programs come close to providing, for the coding
of sound and video, the richness and fine detail available in
text coding. Whatever researchers' choice, digital convergence
will probably reinforce the demand from users for universal,
standard data formats, so that files can be easily transferred
from one software package to another and even from one CAQDAS
program to another. Already, several CAQDAS programs allow the
import of RTF, AIFF, WAV, PIC, GIF and MPEG files, and TATOE and
ATLAS.ti are using XML and HTML as a medium for exporting
text data files.
Retaining rich multimedia data, for instance as
examples in research reports, forcefully raises ethical issues such as
anonymity, ownership and confidentiality. The widespread use of tape
recorders in research many decades ago did not immediately prompt
researchers to publish audio versions of their analyses nor to
archive the recordings. In the main, and for good reasons of
confidentiality etc. researchers transcribed and published only
as text. Though there is now greater interest (and incentive
from funding bodies) in archiving qualitative data, there seems
little pressure to archive original audio and video recordings.
This may be simply because copying, cleaning and anonymising
analogue recordings is too time consuming. The move to digital
media might help here. Moreover, researchers using archived
data seem to want it in as unprocessed a form as possible, so
perhaps this will push depositors to archive their original
video and audio recordings.
At the same time there are emerging digital standards that might have a positive influence on both the ease with which data
can be archived and the ways in which they might be analysed. CARMICHAEL (Extensible Markup Language and Qualitative Data Analysis)
in this issue discusses one of the most important developments, XML.
This is now a widely accepted and used standard way
of marking up text (and by external referencing, other media)
that identifies the type of content. Although in appearance
XML is very like HTML, the language used for describing Web
pages, as CARMICHAEL points out, HTML mixes in descriptions of
how to display the data visually, whereas XML does not do this.
In XML the focus is on just identifying the type of content
leaving unspecified how the data should be displayed. For
CARMICHAEL XML not only offers an excellent way of marking up
qualitative
data in archives but because of the wide availability of tools
for processing such text it offers new ways in which researchers
can undertake analysis. In particular, as he points out, data,
analysis and researchers can all be distributed on a network.
At the moment browsers capable of displaying XML text are not
well suited for the process of marking up, but as existing CAQDAS
programs, and other software, start to import and export in XML
format, this situation is likely to change.
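A small, invented fragment may make CARMICHAEL's point concrete. In the markup below the tags identify what each span of text is (a question, an answer, a coded passage) and say nothing about display; because the format is standard, the coded segments can then be retrieved with any ordinary XML parser, here Python's built-in xml.etree.ElementTree.

```python
# An invented fragment of content-typed markup for a coded interview:
# the tags say what each span is, not how it should look on screen.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<interview respondent="T1">
  <question>How has the new software changed your marking?</question>
  <answer>
    It <coded code="time-saving">saves me hours every week</coded>, though
    <coded code="distance">I sometimes feel further from the students' work</coded>.
  </answer>
</interview>
""")

# Because the markup identifies content types, any standard XML tool can
# retrieve the coded segments; no dedicated CAQDAS program is needed.
for span in doc.iter("coded"):
    print(span.get("code"), "->", span.text)
```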
The further development of CAQDAS will probably
occur in two ways, how the software is used and the forms of analysis
it supports.
As FIELDING notes, in his contribution to a recent collection
on qualitative research, CAQDAS is still treated as a kind of
optional, add-on extra in qualitative research (FIELDING 2002).
It is still not seen as a core part of all qualitative analysis
activity. One piece of evidence for this, says FIELDING, is the
way that books on qualitative research still contain a separate
chapter or section dedicated to CAQDAS rather than integrating
its use into all discussions of analysis. Perhaps in part this
reflects the lack of full recognition for CAQDAS from
practitioners. Certainly some CAQDAS users have had to face hostility
to the software from managers and supervisors (FIELDING &
LEE 1998). This might also help explain the limited use of CAQDAS
functions in reported research. Most published descriptions of
the use of CAQDAS seem to have used the software just for coding
and simple retrieval (SEALE 2002b). The underlying logic of
coding and retrieval and even of searching for coded segments
is little different from manual techniques. To this extent,
therefore, most users of CAQDAS have made little conceptual advance
over the indexing of typed notes and transcripts by marking
them with code words or colored ink. FIELDING and LEE found the
same in their survey of CAQDAS users in the UK (FIELDING &
LEE 1998). Most only used the basic features either because working
in an applied field they were under pressure from sponsors to
produce results quickly or because there was little support
in universities for such software and they found it hard to
learn the program features. Nevertheless, there is some innovative
use of the software to be found. For example, FRIESE used the
visual modelling facilities in ATLAS.ti to examine and illustrate
the varied and idiosyncratic influences on customers' impulse
buying (FRIESE 2000). A contrasting example is the recent analysis
of electronic news articles on cancer sufferers by SEALE (2001;
2002a). The data source was electronic (no transcription required!)
but needed pre-processing for which he used a custom-written,
Visual Basic program. Straightforward coding was done with NVivo
but the analysis was supported by the use of a concordance
generator. This kind of integrated use of software might be another
pointer for the future, though this will depend in part on the
ability to import and export data easily. With better support
for learning about the software, and recognition by sponsors of
the value of deeper analysis, there is hope that a greater
range of programs and program facilities will be used. Perhaps
then CAQDAS will be treated as a necessary part of proper QDA
training and a core activity in almost any qualitative research
project.
Most CAQDAS use and most of the popular
programs are based around a code and retrieve underpinning, with some
search facilities,
but the kinds of analysis the programs support are still
expanding. Software like ATLAS.ti and NVivo include facilities for
visual and hierarchical modelling of concepts and codes. Others
take a much more numerical and logical approach to modelling,
often built around a hypothesis testing or case-based approach
(as opposed to a code-based approach). Examples include HyperRESEARCH,
Ethno and AQUAD Five. (For a recent discussion see FIELDING
2002.) Yet another approach beyond the coding model is provided
by the work of KOCH and ZUMBACH in this issue, which combines
numerical and qualitative approaches to analyzing behavior.
However, there are some forms of qualitative research where
there is little use of CAQDAS. This is true of approaches like
narrative, conversation analysis, biography and discourse
analysis. The most likely reason is that current programs give little
support to the special forms of transcription needed and/or
they poorly support the chronological dimension. The extension
of software functions to include such features is not
difficult. As program features are expanded and enhanced in future
versions,
possibly to a fourth generation of software, we can expect to
see some of these approaches incorporated into mainstream programs
and their use by researchers. One innovation that seems likely
is the development of functions to assist with coding. At the
moment some programs allow automatic coding based on the markup
of documents and it is possible to use search facilities (sometimes
by incorporating powerful tools such as GREP) to help find text
for coding. But in the future this might be further assisted
by integration with concordance generators and thesauruses so
that the software can search in intelligent ways for similar
text and even for negative cases. There is already some
software in development in the US (Qualrus) that uses artificial
intelligence
to examine the way users have already coded text in order to
find further text to code.
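A rough sense of what such assistance involves can be given in a few lines. The sketch below is an illustration only, not Qualrus or any other named program: a regular expression (the mechanism underlying GREP) built from a small thesaurus-like list of terms proposes candidate passages for a code, and a simple keyword-in-context concordance line is printed for each so that the researcher can accept or reject the suggestion.

```python
# Illustration only (not Qualrus or any named program): regular expressions,
# the mechanism underlying GREP, propose candidate passages for a code, and
# a keyword-in-context line is printed so the researcher can judge each one.
import re

text = ("The teachers said the training wasted their time. "
        "Later, several admitted the software saved time on marking, "
        "but preparing materials still took too much time.")

# A tiny thesaurus-like list of terms standing in for the code "time pressure".
terms = ["time", "hours", "deadline"]
pattern = re.compile(r"\b(" + "|".join(terms) + r")\b", re.IGNORECASE)

for match in pattern.finditer(text):
    start, end = match.span()
    left, right = text[max(0, start - 30):start], text[end:end + 30]
    # The researcher decides whether each hit really deserves the code.
    print(f"...{left}[{match.group(0)}]{right}...")
```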
Most of such innovations are simply ways of
helping with analysis that can already be done using manual techniques
and analogue
machinery. Computer use is simply a way of doing things more
easily or with greater confidence and transparency. The acid
test for the deep acceptance of CAQDAS will be when researchers
start using facilities in the software to carry out analysis
that they couldn't possibly have considered using traditional,
manual techniques. This is already happening to some extent
because some researchers are analyzing much larger data sets
than they could ever have countenanced before CAQDAS. (Although,
interestingly, FIELDING & LEE [1998] found that the average
number of interviews or documents CAQDAS users were working with
had stayed at around 40.) There is also evidence that CAQDAS
users undertake analysis in ways that are different from what
researchers did before the software was available. In some
cases, it might be argued, this is an artifact of the program design,
as for example, when researchers produce a multilevel hierarchy
of nodes because the software supports it. However, in other
cases the change seems to have arisen because researchers no
longer need to keep to habits that were only necessary because
they used paper-based transcriptions. For example, most CAQDAS
users do not use line numbers in their analysis. In contrast
researchers who learned their craft before computers often
stick with such old habits, even when using CAQDAS, because they
were a necessary feature of paper-based analyses.
We shall know the use of new technology in
qualitative research has really arrived when researchers use new forms
of data
and new types of analysis that hadn't even been thought of in
the pencil and paper past. Whether this has happened is contentious,
but we think that the papers in this issue provide sufficient
evidence to support the view that new technology has allowed
the investigation of truly novel data types and the use of new
and distinctive forms of analysis.