Analyse qualitative data
"Researchers need to focus on ways in which the actors order their own world, and avoid counting everything." (Silverman, 2004, p. 181)
This sums up two ways in which the approach to data differs in qualitative research from that generated by quantitative investigation. The researcher is not dealing with numbers which can be crunched; neither is he or she dealing with an absolutely literal interpretation of the world. Instead, the researcher needs to use intuition, imagination and interpretation.
How qualitative & quantitative data differ
The process of quantitative research is linear: the researcher will start out with a theory, design a research process, collect data, analyse it and then review findings to see whether or not they support the hypothesis suggested by the theory.
In qualitative research, the process is much more iterative and inductive. The researcher will start out with a question or issue, collect data, analyse the data they have collected, start to formulate theory, go back and look at, or even collect, more data.
With quantitative research, the researcher will normally decide on the method of analysis, including the statistical technique, before data collection even starts. In qualitative research, however, the process is a lot messier, and it is common for the theory, design, collection and analysis phases to overlap.
"In qualitative research, sticking with the original research design can be a sign of inadequate data analysis, not consistency." (Silverman, 2004, p. 152)
Nor can everything be transformed into numbers, as with quantitative data. There is no common unit of measurement, and the researcher will amass large amounts of data in many different forms. Analysis therefore needs to begin with the data in its raw state, acknowledging that it may have come from various different methods of collection such as interviews, focus groups, documents, or images.
Each piece of data, then, needs to be approached in its own terms, and meaning extracted – which may need to be negotiated through the lens of the cultural context in which the author is operating.
An ethnographic view of data: negotiation of meaning
In the scientific view, which is the dominant paradigm for quantitative research, reality exists independently and data can be collected to represent it. The researcher's task is to structure the data collection process so that the data represents the truth. For example, if the researcher wants to find out the most important factors sought in a washing powder, they need to formulate the questions in such a way that all the possibilities are catered for.
The collection and analysis of qualitative data, however, is dominated by the ethnographic paradigm. Ethnographers are concerned to interpret data according to the social world of their participants. Organizations, for example, have their own value systems which will be reflected in the language and the images used both by individuals and in collective statements. For this reason, it is not always possible to take data at face value.
Silverman (2001, p. 134) gives a couple of examples here:
"Notes on candidates for job interviews are grouped according to a number of headings – name, appearance, acceptability, confidence, effort, organization, motivation – omitting ability."
"Groupings of statistics often reflect a way of organizing information that in turn reflects cultural perceptions – for example, at some times, men are more likely to have their deaths regarded as unnatural than are women."
Silverman, D. (2001), Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction, Sage Publications, Thousand Oaks, CA.
Silverman, D. (2004), Doing Qualitative Research, 2nd Edition, Sage Publications, Thousand Oaks, CA.
Before we go on to look at the detail of data analysis, however, it is useful briefly to remind ourselves of the principles governing data collection.
Qualitative data divides, broadly speaking, into two main categories:
- That which is collected by the researcher, through interviews, focus groups, or ethnographic field observation.
- That which exists in data form prior to the research – for example, public documents, statistics, e-mails, etc.
The second category of data will already have been recorded, so will not present major challenges with regard to collection and management. We therefore concentrate here mainly on issues relating to the former category.
Interviews
There are two views of interviews, and which you take will depend upon, and affect, the status of the data you end up with: whether you believe you have objective facts about the world, or subjective perceptions or narratives. Which view you take will also affect how you structure the interview:
- The positivist view maintains that interviews give data which are 'facts' about the world. To collect this sort of data, it is best to ask questions in standard format, worded in the same way, which will enable you to quantify the responses.
- The constructionist, or emotionalist, view maintains that interviewees construct their own, more subjective view of reality – their own narrative of events. This type of data is best collected by unstructured, open-ended interviews.
It is almost always better to record interviews and to work from transcripts, for two reasons:
- It is not always possible to rely on one's memory of conversations.
- Tapes constitute a public record, which cannot be disputed, and which can if necessary be reanalysed by others, with different questions/theories in mind.
Collecting data in the field, for example in the course of participant observation, is a highly skilled business. You are not merely making a record, but interpreting what you can see and hear, so that you are collecting and analysing data at the same time. Your record will typically comprise:
- Your notes made at the time, which will necessarily be brief
- Expanded notes made as soon as possible after the observation
- A field work journal which looks at problems and ideas
- A 'running record' of analysis and interpretation
Silverman (2004), based on Spradley, states:
"Memos or contact sheets made after each observation, covering people, events, situations, themes, interpretations, research questions, hypotheses, and suggesting the focus of the next observation."
Silverman (2004), based on Spradley and Miles and Huberman, states:
"It is common for researchers to record what they hear and not what they see, although the latter is very important – the layout of a shop or restaurant, the size of workspace in an office, the care taken (or not) to avoid hazards in a factory etc."
Some general principles for managing data
You need to record all your data in an organized way so that you have an audit trail of:
- When it was collected
- In what form, e.g. document, interview, etc.
- What it refers to, e.g. person, company, etc.
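An audit trail of this kind can be kept as simply as a small structured record per item of data. The sketch below uses Python's dataclasses; the field names and file path are illustrative only, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

# One possible shape for an audit-trail record: when the data was
# collected, in what form, what it refers to, and where it is filed.
@dataclass
class DataRecord:
    collected: date   # when it was collected
    form: str         # document, interview, etc.
    refers_to: str    # person, company, etc.
    location: str     # where the raw material is filed (illustrative path)

record = DataRecord(
    collected=date(2007, 3, 1),
    form="interview",
    refers_to="Company X, operations manager",
    location="transcripts/interview_04.txt",
)
```

Kept consistently, records like this make it straightforward to show later exactly where each piece of evidence came from.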
Data should also be reliable: it should form as accurate a record as possible. See the notes above on using transcripts, and on making (brief) notes as you observe and fuller ones as soon as possible afterwards.
Finally, you need to retain all your notes, transcripts etc. so that you have a complete record of all your research. This is important whether you are preparing an undergraduate student dissertation, a doctoral thesis, or a report for funded research.
Analysing as you collect
As we saw in the section on introductory considerations, qualitative research differs from quantitative in being non-linear, with the activities of data collection and analysis intertwined. Most researchers advocate starting some coding before all the data comes in, for two reasons:
- You avoid 'drowning in data' – qualitative research can generate voluminous data, and the researcher can be faced with hundreds, even thousands, of pages.
- You get to develop your analysis – concepts and themes start to emerge, and if you have decided to use a particular method, such as content or discourse analysis, you have a chance to see how it will work, and whether it might be better to adopt another approach.
Thus when you get a certain way through your collection, say after the first few interviews or first major site visit, you could make your initial analysis. The next section, Carrying out the analysis, goes into more detail on methodology.
Carrying out the analysis
In this section, we shall look in general terms at the process of analysis, and the techniques used. Qualitative researchers often use specific methods, such as content analysis, narrative analysis, grounded theory, etc., and we shall be examining these in greater detail in the next section.
Coding is the heart of the analysis process – once you have your codes, you can start to mark up the texts, transcripts, or whatever you are using, look at emergent themes and subthemes and begin to build theory. These are the main issues you need to be concerned with:
Sampling
You need to decide on some way of obtaining a sample of your texts. You could use either random or purposive sampling; if the latter, you could choose samples that are typical, atypical or deviant, or that illustrate a maximum number of variables.
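To make the distinction concrete, here is a minimal Python sketch of the two sampling strategies; the corpus, its attributes, and the selection criterion are all invented for illustration:

```python
import random

# Hypothetical corpus: each transcript tagged with attributes we might
# sample on (site, respondent role) -- names are illustrative only.
transcripts = [
    {"id": 1, "site": "A", "role": "manager"},
    {"id": 2, "site": "A", "role": "staff"},
    {"id": 3, "site": "B", "role": "manager"},
    {"id": 4, "site": "B", "role": "staff"},
    {"id": 5, "site": "C", "role": "staff"},
]

def random_sample(corpus, n, seed=0):
    """Simple random sample of n transcripts (seeded for repeatability)."""
    rng = random.Random(seed)
    return rng.sample(corpus, n)

def purposive_sample(corpus, criterion):
    """Purposive sample: keep every transcript matching a chosen criterion."""
    return [t for t in corpus if criterion(t)]

# e.g. deliberately select the managers' accounts
managers = purposive_sample(transcripts, lambda t: t["role"] == "manager")
```

The criterion function is where the purposive choice (typical, atypical, deviant, maximum-variation) would be expressed.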
Obtaining a unit of analysis
The first task when you have your sample is to consider how you are going to break the text down: what will your unit of analysis be? There are a number of possibilities:
- research sections: whole interviews, responses to interview questions
- grammatical: sentences, paragraphs, etc.
- formatting: lines, pages
- thematic: themes, ideas
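Where grammatical units are chosen, the segmentation itself is easy to automate. The sketch below splits a passage into sentences using a deliberately naive rule, purely for illustration:

```python
import re

# Break a text into grammatical units (here, sentences) -- the simplest
# of the options above. The splitting rule is naive: it divides on
# sentence-final punctuation followed by whitespace.
def sentences(text):
    """Return the sentences of a passage as a list of strings."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

units = sentences("Morale is low. People fear the restructuring! What happens next?")
```

Real transcripts (with abbreviations, ellipses, and broken-off speech) would need a more careful rule, but the principle of fixing one unit of analysis before coding is the same.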
"Corporate self-presentation on the WWW: strategies for enhancing usability, credibility and utility" (Pollach, Corporate Communications: An International Journal, Vol. 10 No. 4) uses the "About us" page as its unit of analysis (see section 3.2, "Unit of analysis", for the justification).
Finding themes and concepts
First of all, you will need to become thoroughly familiar with your material. Certain key ideas, patterns, themes etc. will probably begin to emerge as you collect your data – remember, collection and analysis can be parallel activities.
You can familiarize yourself with the material by studying it in different ways – for example, not just reading it line by line, but also dipping in and out, looking at what isn't there – for example, pauses, questions avoided etc.
The process of coding occurs when you translate the key ideas into more abstract concepts, which will become your coding variables, or the labels for the phenomena occurring in the text. For example, you may be interviewing people about their response to a restructuring, and a recurrent theme may be fear of an increasing workload. You will need to give these variables code names, whilst remaining aware of subtle but significant differences, and distinguishing them in the coding (for example, fear of increased workload could be "fear work", fear of longer hours could be "fear hours").
The above process is known as open coding – the categories are allowed to emerge from a detailed scrutiny of the data. The next stage is to look at the relationships between the codes that label the categories; for example, you could look at cause and effect. Thus, in the above example, the cause of the fear (of an increasing workload/longer hours) may be a belief that the restructuring will involve fewer staff. This is known as axial coding. (You can also use graphical techniques, for example mind maps, influence diagrams, or logic diagrams, to look at relationships between codes.) Finally comes selective coding – when the researcher tries to find the 'story', and looks for core categories and fits other things around them.
"A grounded theory for resistance to change in a small organization" (Macrì et al., Journal of Organizational Change Management, Vol. 15 No. 3), contains a detailed account of the coding process, using the approach described above.
"Using grounded theory to model visitor experiences at heritage sites: methodological and practical issues" (Daengbuppha et al., Qualitative Market Research: An International Journal, Vol. 9 No. 4) contains a number of diagrams (Figures 3-8) which illustrate the different ways of coding and the process of deriving categories.
Sometimes, researchers prefer to use a more structured approach than that outlined above, taking a particular set of concepts from the literature, particularly if they are up against time pressures (as in a student project). Content analysis is one example of where this is done.
Building a code book
This is an organized list of codes, as a reference. There are a number of ways of developing this; one is to include:
- a detailed description of each code
- inclusion and exclusion criteria
- examples, with real text (if a code is particularly abstract, you could include examples of what it does not include)
Needless to say, if there is more than one researcher on the project, the codes need to be agreed between them.
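A code book of this shape can be kept as simply as a keyed list of entries. The following sketch is one possible layout, with invented codes and criteria carried over from the earlier restructuring example:

```python
# A minimal code book: each entry holds the description, inclusion and
# exclusion criteria, and real-text examples suggested above.
# All field values are invented for illustration.
codebook = {
    "fear work": {
        "description": "Respondent expresses anxiety about an increased workload",
        "include": ["references to the volume or amount of work"],
        "exclude": ["anxiety about longer hours (use 'fear hours')"],
        "examples": ["I just can't see how we'll cope with more work."],
    },
    "fear hours": {
        "description": "Respondent expresses anxiety about longer working hours",
        "include": ["references to time spent at work"],
        "exclude": ["anxiety about workload volume (use 'fear work')"],
        "examples": ["I expect we'll all be staying late every night."],
    },
}

def lookup(code):
    """Return the agreed definition of a code, or None if it is not in the book."""
    return codebook.get(code)
```

Because each entry records what a code does *not* cover, a shared code book of this kind also gives multiple researchers a concrete basis for agreeing their codes.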
Marking the text
Once you have your codes, you are ready to start marking the text. You can tag particular bits of text for later indexing (which can be done with software – see below). You can also mark the codes manually against your transcript, for example having a separate column for the codes.
Once you begin to see the relationships between codes, you begin to identify a pattern, which can form the basis for a theoretical model.
Once the model is constructed, it should be constantly tested, particularly against cases which disprove it. This is the essence of the iterative, cyclical nature of qualitative research. Many researchers use a method derived from Kolb's learning cycle, where reflection, conceptualization and experimentation follow experience, i.e. the data.
As you get ideas about the theory, write memos to yourself – these could be just keywords on Post-it notes, or longer documents outlining a particular thesis.
In "Strategic marketing planning: a grounded investigation" (European Journal of Marketing, Vol. 37 No. 3/4), Ashill et al. describe a three-stage analysis phase:
- Breaking down interview transcripts into "thought units", which "ranged from a phrase to several sentences".
- The organization of thought units into categories, in an attempt to "capture the perceived communality or shared message amongst the thought units"; disagreements between the researchers were resolved by asking an independent researcher, who was not familiar with the literature, to carry out a content analysis for comparison.
- The distillation of the categories into "seven unifying themes or 'core categories' that provide a summary of 'what is going on' in the data".
"Online learning dialogues in learning through work" (Sara Bosley and David Young, Journal of Workplace Learning, Vol. 18 No. 6) describes the process of developing and reforming categories from codes as the analysis proceeds.
Researchers have been using software for qualitative analysis since the 1980s, with specific programmes being developed by the mid-1990s and rapidly becoming more and more sophisticated. The difference between this software, known as computer-assisted qualitative data analysis software (CAQDAS), and the programmes used in quantitative research, such as SPSS, is that CAQDAS cannot actually carry out the analysis. It can, however, help you manage the data more efficiently, and it is here that it is probably most valuable.
Some of the most well-known programmes are: The Ethnograph, QSR NVivo, winMAX, ATLAS/ti, and NUD*IST.
These are some of the things that CAQDAS software can help you with:
- Making notes, writing up, editing, and writing reflective commentaries (memos).
- Storing data – it can be very helpful, given the amount of data which you will probably collect, to have it all brought together.
- Indexing and retrieving – themes are identified, grouped so that categories emerge, then you can tag the relevant text, and compare same-tagged texts.
- Seeing the bigger picture – it can provide pictorial representations of data.
- Linking – connecting relevant data.
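The indexing-and-retrieving capability in particular can be pictured with a toy model. This is not the API of any real CAQDAS package, just an illustration of tagging segments and pulling back everything filed under one code:

```python
from collections import defaultdict

# A toy index of the kind CAQDAS packages maintain: each code maps to the
# text segments tagged with it. Codes and segments are invented.
index = defaultdict(list)

def tag(segment, *codes):
    """File a segment under each of the codes applied to it."""
    for code in codes:
        index[code].append(segment)

tag("We'll cope with more work somehow.", "fear work")
tag("Staying late every night already.", "fear hours", "fear work")

def retrieve(code):
    """Pull back every segment tagged with a code, for side-by-side comparison."""
    return index[code]
```

Comparing all same-tagged segments in one place is precisely what makes themes and subthemes easier to spot.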
However, the drawback is that it takes some time to learn the software, so you need to ensure that your research project is large enough to justify the opportunity costs, not to mention the actual cost of the software.
"Analysing qualitative data: computer software and the market research practitioner" (Pauline Maclaran and Miriam Catterall, Qualitative Market Research: An International Journal, Vol. 5 No. 1) provides a description of CAQDAS software and its main features. It concentrates on NUD*IST and provides an example of its graphic display capability.
Assessing your analysis
How do you know when you have completed your analysis, and then how do you know that it is reliable, valid and generalizable?
Because of the more open-ended nature of qualitative analysis, it can be more difficult to know when analysis is complete. The main (and unquantifiable) indications are:
- when analysis no longer adds anything new
- when you have answered the question you set out to ask.
Reliability
It would be hard to repeat conditions exactly, and no two researchers would, for example, have the same conversations or observe the same issues. However, the main points to emerge should be the same if you were to repeat the research or if someone else were to tackle it.
Validity
Can you show evidence of rigour in your data analysis, including how you got from your data to your conclusions? Have you eliminated bias, in particular that which those you are observing want you to see? Is the evidence sufficient?
Generalizability
What you have seen may not apply universally, but are there other pieces of work which show similar findings?
Some specific analytic techniques
In this section we shall look at some of the specific techniques which researchers use to analyse qualitative data. Note that these methods are not mutually exclusive and can be combined.
Grounded theory
As its name implies, grounded theory involves grounding the analysis in the experience that provides the data, whether this originates from interviews, participant observation, or some other method. A sample of text – transcripts or notes – is read closely, and emergent themes noted, in a process known as open coding (see earlier), with a view to understanding the issues which are most important to the research subjects. The data is approached with the minimum of preconceptions, and the literature is often only studied after initial theory building has begun.
As categories emerge, these are built into theoretical models, more data may be collected, and the findings compared with the literature. A variety of methods are used including the constant comparison method, which involves comparisons between individual sources of data, and between the data and the literature. This provides useful triangulation.
Grounded theory is thus classic inductive research, in that data collection, analysis and theorizing are iterative.
Accounts of grounded theory can be found in the following Emerald articles, some of which also describe research informed by the method:
"A grounded theory for resistance to change in a small organization", Macrì et al., Journal of Organizational Change Management, Vol. 15 No. 3, pp. 292-310.
"Intransivities of managerial decisions: a grounded theory case", David Douglas, Management Decision, Vol. 44 No. 2, pp. 259-75.
"Using grounded theory to model visitor experiences at heritage sites: Methodological and practical issues", Daengbuppha et al., Qualitative Market Research: An International Journal, Vol. 9 No. 4, pp. 367-88.
"Grounded theory, ethnography and phenomenology: a comparative analysis of three qualitative strategies for marketing research", Christina Goulding, European Journal of Marketing, Vol. 39 No. 3/4, pp. 294-308.
"Grounded theory in sales research: an investigation of salespeople’s client relationships", Susi Geiger and Darach Turley, Journal of Business & Industrial Marketing, Vol. 18 No. 6/7, pp. 580-94.
"Grounded theory methodology and practitioner reflexivity in TQM research", Denis Leonard and Rodney McAdam, International Journal of Quality & Reliability Management, Vol. 18 No. 2, pp. 180-94.
Content analysis
Content analysis analyses texts by reducing them to a unit-by-variable matrix, according to a set of variables which have already been isolated. It is particularly useful if a great deal is known about the subject already, and the categories are already established. In some instances, themes are quantified and the results expressed statistically.
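A unit-by-variable matrix can be as simple as rows of counts. In this sketch the units, variables and counts are all invented; it merely illustrates the structure and how themes might be quantified:

```python
# A toy unit-by-variable matrix: units (e.g. corporate web pages) are rows,
# pre-established variables are columns, and cells count occurrences.
# Variables and counts are invented for illustration.
variables = ["mentions customers", "mentions innovation", "uses first person"]

matrix = {
    "company_A": [3, 1, 5],
    "company_B": [0, 4, 2],
}

def total(variable):
    """Total occurrences of one variable across all units."""
    col = variables.index(variable)
    return sum(row[col] for row in matrix.values())
```

Once the data is in this form, it can be summarized or tested statistically like any quantitative dataset.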
Note that the following examples of use of content analysis in Emerald articles all take the web as their research field, where evaluation methods are well established.
"Corporate self-presentation on the WWW: strategies for enhancing usability, credibility and utility" (Irene Pollach, Corporate Communications: An International Journal, Vol. 10 No. 4) uses content analysis (followed by discourse analysis – see below) to examine how companies use language and hypertext to create a favourable image of themselves, using a framework from systemic functional linguistics. See 'analytical framework' for how they justify their choice.
"Corporate reporting on the Internet: audit issues and content analysis of practices" (Fisher et al., Managerial Auditing Journal, Vol. 19 No. 3) describes use of content analysis to investigate corporate reporting against certain criteria.
Note, however, that the term "content analysis" can also be used generically to refer to the analysis undertaken for qualitative data, which should more properly be referred to as discourse analysis or narrative analysis, and which will be examined in more detail below.
Discourse analysis
This is one of the main methods used to analyse texts. It involves looking at language in its context, the idea being that particular communities – be they social, disciplinary, cultural or organizational – give language a distinct meaning to describe their experiences. "Discourse" is defined as "[a] set of meanings, metaphors, representations, images, stories and statements which together produce a particular version of the world" (Berglund and Johansson, 2007, quoting Foucault (1993), Laclau and Mouffe (1985) and Burr (1995)).
"A discourse analysis technique for charting the flow of micro-information behavior" (Diane Nahl, Journal of Documentation, Vol. 63 No. 3) uses discourse analysis to analyze information behaviors according to "the three domains of behavior that have been investigated in psychology and education for several decades: the affective domain, the cognitive domain and the sensorimotor domain".
"Constructions of entrepreneurship: a discourse analysis of academic publications" (Karin Berglund and Anders W. Johansson, Journal of Enterprising Communities: People and Places in the Global Economy, Vol. 1 No. 1) discusses discourse analysis and the reasons for its use under "Methodology".
"Pairing information with poverty: traces of development discourse in LIS" (Jutta Haider and David Bawden, New Library World, Vol. 107 No. 9/10) looks at Foucault's ideas of discourse.
"Reporting CSR – what and how to say it?" (Anne Neilsen and Christa Thomsen, Corporate Communications: An International Journal, Vol. 12 No. 1) discusses Fairclough's model.
Narrative analysis
Narrative analysis looks at texts, conversations and interviews as narratives which describe subjects' experiences, the idea being that these narratives are influenced – and modified – by social processes.
"Intense, vigorous, soft and fun: identity work and the international MBA" (Nic Beech, critical perspectives on international business, Vol. 2 No. 1) describes this method – see "Methodology" section.
Conversation analysis
This method of analysis looks at dialogue and, in particular, the roles and identities taken on by participants (e.g. in student dialogue one student may 'teach' another), often working back from various 'outcomes' such as laughter or a request for clarification.
"Embodying experience: a video-based examination of visitors' conduct and interaction in museums" (Dirk vom Lehn, European Journal of Marketing, Vol. 40 No. 11/12) uses this technique in relation to videos of people's behaviour in museums.
Analytic induction
This involves rigorous testing of a hypothesis. As such, its importance in qualitative research is that it can increase rigour, and is therefore valuable in theory building.
Useful accounts of this method are found in:
"Towards rigour in action research: a case study in marketing planning" (Hugh N. Wilson, European Journal of Marketing, Vol. 38 No. 3/4) – see "The promise of analytic induction in action research" section.
"Towards a map of marketing information systems: an inductive study" (Daniel et al., European Journal of Marketing, Vol. 37 No. 5/6).
Semiotics
A method which looks at the culturally determined meaning encoded in signs and symbols.
"Thinking the thoughts they do: symbolism and meaning in the consumer experience of the 'British pub'" (Clarke et al., Qualitative Market Research: An International Journal, Vol. 1 No. 3) describes this approach and its use in consumer research.
Hermeneutics
A method of textual analysis, originally developed to study the Bible, now often used to examine documents in general.
"Metaphorical mediation of organizational change across space and time" (Jeff Waistell, Journal of Organizational Change Management, Vol. 19 No. 5) provides a description of this method and its use to study texts.
"Hermeneutics as a bridge between the modern and the postmodern in library and information science" (Joacim Hansson, Journal of Documentation, Vol. 61 No. 1) describes its role in LIS.