How to... implement grounded theory


Origins and features of GT

The popularity of grounded theory (GT) in particular, and of qualitative methods in general, is part of a trend away from total reliance on positivism towards a more interpretive view. Part of its attraction is that in offering a structured approach to the collecting and coding of data, as well as rigorous theory building, it can refute the claim that qualitative research lacks rigour.

A problem, however, is that GT itself is often applied rather haphazardly, and used as a general term for qualitative data analysis, or as a research technique, whereas it is more properly a methodology.

GT was first introduced in 1967 by two sociologists, Barney Glaser and Anselm Strauss. Mansourian (2008), who describes it as a "general, inductive and interpretive research method", quotes a definition from each author:

"Grounded theory is based on the systematic generating of theory from data, that itself is systematically obtained from social research (Glaser 1992, p. 2)."

"Grounded theory is a general methodology for developing theory that is grounded in data systematically gathered and analyzed. Theory evolves during actual research, and it does this through continuous interplay between analysis and data collection (Strauss and Corbin 1994, p. 273)."

Its methodological roots lie in sociology, and its philosophical ones in social constructivism and symbolic interactionism. Reality exists in the experiences of individuals and groups, and it is the researcher’s job to try and understand that reality without imposing too many of his or her own ideas. A good account of the theoretical roots of GT can be found in Seldén (2005: pp. 114-117).

According to Gurd (2008), GT is best suited to "questions of process" – how rather than why questions – to new situations and those which require a fresh point of view. It is also good at understanding behaviour, whether of consumers or of workers in an organization. In the latter case, the research may seek to understand problematic behaviour and how the informant hopes to resolve it (Glaser (1992), quoted in Jones and Kriflik, 2006).

The distinguishing feature of GT is the systematic and iterative process of collecting and analysing data and gradually transforming it into concepts (Strauss and Corbin (1998), quoted in Elharidy et al., 2008).

Its key advantage is its ability to bridge the gap between empirical research that is not linked to theory, and theory that is not grounded in empirical research. Essentially an inductive method, it offers an alternative to deductive research, where a hypothesis is formed based on theory and then tested in the field (Elharidy et al., 2008; Geiger and Turley, 2003).

Key procedures in GT

It is difficult to sum up the principles of GT as there have been a number of changes and different adaptations in the 40 years since its first use – and herein lies one of the main difficulties in its use as a research method. However, Gurd (2008: p. 127f.) has surveyed the main writings on GT and maintains that there are four uncontested principles:

  1. iterative data collection
  2. theoretical sampling
  3. constant comparison
  4. explicit coding and theory building.

Iterative data collection

Perhaps the most obviously observable difference between GT and other forms of research is the order in which the processes of research are carried out. In some research, the processes of selecting, collecting and analysing data are sequential: each stage cannot begin until the previous one is finished.

In GT, however, these three processes are iterative. Rather than waiting until all the data are collected, analysis begins as soon as the first bits of data are in, while further collection of data builds on the first analysis.
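For readers who think procedurally, the cycle of collecting and analysing in alternation can be sketched in code. This is a purely illustrative toy, not part of any GT text: the corpus, the keyword "fragments" standing in for interview material, and the helper names are all invented for the sketch.

```python
# Illustrative sketch of GT's iterative collect-analyse cycle.
# All names and data here are hypothetical stand-ins.

def analyse(fragments, categories):
    """Toy analysis: tally each fragment into a running set of categories."""
    for frag in fragments:
        categories[frag] = categories.get(frag, 0) + 1
    return categories

# Each inner list stands in for one round of data collection
# (e.g. a batch of interviews); in real GT, what appears in a later
# round would be steered by the analysis of the earlier ones.
corpus = [
    ["workload", "autonomy"],      # first interviews
    ["autonomy", "recognition"],   # follow-up prompted by first analysis
    ["recognition"],               # later, narrower collection
]

categories = {}
for round_of_data in corpus:
    # analysis begins as soon as the first data are in
    categories = analyse(round_of_data, categories)

print(categories)  # {'workload': 1, 'autonomy': 2, 'recognition': 2}
```

The point the sketch makes is structural: analysis sits inside the collection loop rather than after it.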

Theoretical sampling

Another major difference between GT and other research methods is the attitude to sampling. Statistical random sampling seeks a sample of an adequate size that is representative of the population to be studied. In GT studies, however, there is no minimum or maximum sample size. Researchers will often select early samples of "key informants" who will in turn prompt useful avenues for investigation (Goulding, 2005). Later samples should proceed according to theoretical relevance (Glaser (1998), quoted in Geiger and Turley, 2003).

Rather than being representative, the samples can actually be chosen to express differences from the group hitherto explored: for example, a researcher might seek to explore different groups within an organization or a contrasting organization. (This is known as maximizing differences.) There is a deliberate attempt to seek out new data (disconfirming cases) that contradict the original theory in order to enrich theory development.

Constant comparison

The data are constantly compared to establish variations in patterns. Texts (from interviews, observations, documents, etc.) are analysed line by line and provisional themes (known as categories) noted. These themes (categories) are then compared with other data and differences and similarities are noted. When differences are noted, then either the category’s definition must be changed or a new category developed.
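The compare-then-decide logic of constant comparison can also be shown as a toy loop. Again this is only an illustration under invented assumptions: the "similarity" test (sharing a word) and the sample fragments are hypothetical, and the branch where a category's definition is revised rather than a new one created is not shown.

```python
# Illustrative sketch of constant comparison: each new fragment is
# compared with existing categories; a mismatch starts a new category.
# The similarity test and data are hypothetical stand-ins.

def similar(fragment, category_examples):
    """Toy comparison: does the fragment share a word with the category?"""
    words = set(fragment.split())
    return any(words & set(example.split()) for example in category_examples)

def constant_compare(fragments):
    categories = []  # each category is a list of its example fragments
    for frag in fragments:
        for cat in categories:
            if similar(frag, cat):
                cat.append(frag)  # fits an existing category
                break
        else:
            categories.append([frag])  # a difference: develop a new category
    return categories

cats = constant_compare([
    "too much paperwork",
    "paperwork takes evenings",
    "no say in decisions",
])
print(len(cats))  # 2: a 'paperwork' theme, plus a new category
```

Real GT comparison is of course interpretive, not lexical; the sketch only mirrors the control flow of comparing each datum against every provisional category before deciding.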

Explicit coding and theory building

The researcher should articulate the processes associated with data analysis: how they have coded the data and built their theory on it. This has implications for how the research is written up.