How to...
Implement grounded theory

The popularity of grounded theory (GT) in particular, and of qualitative methods in general, is part of a trend away from total reliance on positivism towards a more interpretive view. Part of its attraction is that in offering a structured approach to the collecting and coding of data, as well as rigorous theory building, it can refute the claim that qualitative research lacks rigour.

A problem, however, is that GT itself is often applied rather haphazardly, and used as a general term for qualitative data analysis, or as a research technique, whereas it is more properly a methodology.

Origins & features of GT

GT was first introduced in 1967 by two sociologists, Barney Glaser and Anselm Strauss. Describing it as a "general, inductive and interpretive research method", Mansourian (2008) quotes a definition from each of its originators:

"Grounded theory is based on the systematic generating of theory from data, that itself is systematically obtained from social research (Glaser 1992, p. 2)."

"Grounded theory is a general methodology for developing theory that is grounded in data systematically gathered and analysed. Theory evolves during actual research, and it does this through continuous interplay between analysis and data collection (Strauss and Corbin 1994, p. 273)."

Its methodological roots lie in sociology, and its philosophical ones in social constructivism and symbolic interactionism. Reality exists in the experiences of individuals and groups, and it is the researcher’s job to try to understand that reality without interpolating too many of his or her own ideas. A good account of the theoretical roots of GT can be found in Seldén (2005: pp. 114-117).

According to Gurd (2008), GT is best suited to "questions of process" – how rather than why questions – to new situations and those which require a fresh point of view. It is also good at understanding behaviour, whether of consumers or of workers in an organisation. In the latter case, the research may seek to understand problematic behaviour and how the informant hopes to resolve it (Glaser (1992), quoted in Jones and Kriflik, 2006).

The distinguishing feature of GT is the systematic and iterative process of collecting and analysing data and gradually transforming it into concepts (Strauss and Corbin (1998), quoted in Elharidy et al., 2008).

Its key advantage is its ability to bridge the gap between empirical research that is not linked to theory, and theory that is not grounded in empirical research. Essentially an inductive method, it offers an alternative to deductive research, in which a hypothesis is formed on the basis of theory and then tested in the field (Elharidy et al., 2008; Geiger and Turley, 2003).

Key procedures in GT

It is difficult to sum up the principles of GT, as there have been a number of changes and adaptations in the 40 years since its introduction – and herein lies one of the main difficulties in its use as a research method. However, Gurd (2008: p. 127f.) has surveyed the main writings on GT and maintains that there are four uncontested principles:

  1. iterative data collection
  2. theoretical sampling
  3. constant comparison
  4. explicit coding and theory building.

Iterative data collection

Perhaps the most readily observable difference between GT and other forms of research is the order in which the research processes are carried out. In other research designs, the processes of selecting, collecting and analysing data are linear: each stage cannot start until the previous one has finished.

In GT, however, these three processes are iterative. Rather than waiting until all the data are collected, analysis begins as soon as the first data are in, and further data collection builds on that initial analysis.
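
To make the contrast with a linear design concrete, the interleaving of collection, coding and analysis can be pictured as a loop. The Python sketch below is purely illustrative: every name in it is a hypothetical placeholder for real fieldwork and analysis, and the stopping rule is simply running out of informants.

    # Illustrative only: collection, coding and analysis interleaved in one loop,
    # rather than run as separate, sequential phases. All names are placeholders.

    def collect(informant):
        return f"transcript from {informant}"            # stand-in for an interview

    def code_and_analyse(transcript, concepts):
        concepts.add(f"concept noted in {transcript}")   # stand-in for coding
        return concepts

    def next_informant(concepts, pool):
        # Choose the next informant in the light of what has emerged so far
        # (here, simply the next name on a list).
        return pool.pop(0) if pool else None

    concepts = set()
    pool = ["key informant", "contrasting informant", "possible disconfirming case"]
    informant = pool.pop(0)
    while informant is not None:
        concepts = code_and_analyse(collect(informant), concepts)  # analysis starts at once
        informant = next_informant(concepts, pool)                 # and shapes the next step
    print(len(concepts), "provisional concepts after three interviews")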

Theoretical sampling

Another major difference between GT and other research methods is the attitude to sampling. Statistical random sampling seeks a sample of an adequate size that is representative of the population to be studied. In GT studies, however, there is no minimum or maximum sample size. Researchers will often select early samples of "key informants" who will in turn prompt useful avenues for investigation (Goulding, 2005). Later samples should proceed according to theoretical relevance (Glaser (1998), quoted in Geiger and Turley, 2003).

Rather than being representative, samples may be chosen precisely because they differ from the groups explored so far: for example, a researcher might seek out different groups within an organisation, or a contrasting organisation. (This is known as maximising differences.) There is a deliberate attempt to seek out new data (disconfirming cases) that contradict the emerging theory, in order to enrich theory development.

Constant comparison

The data are constantly compared to establish variations in patterns. Texts (from interviews, observations, documents, etc.) are analysed line by line and provisional themes (known as categories) noted. These themes (categories) are then compared with other data and differences and similarities are noted. When differences are noted, then either the category’s definition must be changed or a new category developed.

Explicit coding and theory building

The researcher should articulate the processes of data analysis: how the data have been coded and how the theory has been built from them. This has implications for how the research is written up.

Development of GT

It is not possible to understand GT without knowing how it developed, in particular the rift that emerged between its two original protagonists.

GT started with Glaser and Strauss’s 1967 publication of The Discovery of Grounded Theory: Strategies for Qualitative Research. This was followed by a number of subsequent publications, which attempted to provide greater clarity about the method.

The rift between Glaser and Strauss emerged when the latter collaborated with a nursing researcher, Juliet Corbin, to produce a more clearly defined system (Gurd, 2008). Their approach to data was much more structured and codified, while Glaser insisted on the emergent nature of theory development in data analysis and coding.

While Gurd (2008) distinguished four different schools of GT, we shall here confine ourselves to the two main ones, whose protagonists are, respectively, Glaser on the one hand and Strauss and Corbin on the other.

Table I, below, is a summary of the main differences between the two approaches:

Table I: Glaser vs Strauss

Theory development
  Glaser: Theory emerges through a detailed process of coding leading to "theory saturation"; in other words, it is purely inductive (Elharidy et al., 2008). The product of GT is a set of integrated conceptual hypotheses, organised around a core category, generated from systematic research methodology (Glaser (2003: p. 2), quoted in Jones and Noble, 2007). Data for description or proof are not required.
  Strauss and Corbin: Rigorous coding leads to verification and also to the ability to generalise beyond the immediate study (Elharidy et al., 2008). Elsewhere, however, they imply that GT can also develop purely descriptive accounts (Jones and Noble, 2007).

Emergence of research question and use of literature
  Glaser: As far as possible, the researcher should come to the study without preconceptions, with only a broad topic area in mind and without detailed research questions. The literature is not studied beforehand. The researcher should be totally neutral and allow both topic and understanding to emerge.
  Strauss and Corbin: The researcher is encouraged to use prior experience (both professional and personal) and knowledge. The literature will be a good source not only of ideas but possibly also of data (Strauss and Corbin, 1998).

Use of procedures
  Glaser: Glaser is insistent that a set of procedures must be followed during data analysis and coding (Jones and Noble, 2007).
  Strauss and Corbin: Their later writings imply that these procedures are optional (Jones and Noble, 2007).

Coding
  Glaser: Coding should be as open as possible, with as many categories as possible being used until the core category emerges (Jones and Noble, 2007).
  Strauss and Corbin: As Glaser sees it, there is more of an attempt to "force" coding (Jones and Noble, 2007).

How to use GT
  Glaser: Defines GT (1992: p. 16) as "a general methodology of analysis linked with data collection and uses a systematically applied set of methods to generate an inductive theory about a substantive area" [emphasis added]. GT is a research design, not just a technique.
  Strauss and Corbin: Describe it (1990: p. 24) as a "qualitative research method that uses a systematic set of procedures to develop and inductively derive grounded theory about a phenomenon" [emphasis added]. In other words, it is a research technique to be systematically followed.

It should be emphasised that Glaser does not dismiss Strauss and Corbin’s approach, but rather, denies that it is still GT (Geiger and Turley, 2003).

Table II, following, is Jones and Noble’s description of the differences between the two schools (2007: p. 93).

Table II: Jones and Noble's description of the differences between Glaser and Strauss

Emergence and researcher distance
  Glaserian school: Everything emerges in a grounded theory – nothing is forced or preconceived. Researchers are distant and unknowing as they approach the data, with only the world under study shaping the theorising.
  Straussian school: 1987, 1990, 1998: the researcher adopts a more active and provocative influence over the data, using cumulative knowledge and experience to enhance sensitivity. Logical elaboration and preconceived tools and techniques can be employed to shape the theorising.

Development of theory
  Glaserian school: The goal is to generate a conceptual theory that accounts for a pattern of behaviour which is relevant and problematic for those involved.
  Straussian school: 1987: conceptually dense, integrated theory development is the only legitimate outcome. 1990, 1998: grounded theory can also be used for developing non-theory (conceptual ordering or elaborate description).

Specific, non-optional procedures
  Glaserian school: The method involves clear, extensive, rigorous procedures and a set of fundamental processes that must be followed.
  Straussian school: 1987: grounded theory encompasses a number of distinct processes that must be carried out. 1990, 1998: researchers can pick from a smorgasbord of procedures, which they can choose, reject or ignore.

Core category
  Glaserian school: The theoretical formulation that represents the continual resolving of the main concern of the participants.
  Straussian school: 1987, 1990, 1998: the main theme of a predetermined phenomenon, which integrates all the other categories and explains the various actions and interactions aimed at managing or handling the relevant event, happening or incident.

Coding
  Glaserian school: Open, selective and theoretical.
  Straussian school: Open, axial and selective, but with the following variations: 1987: selective coding is an "emergent" process based on continuous use of memo sorting and integrative diagrams. 1990: selective coding employs the "forcing" mechanism of the coding paradigm. 1998: the paradigm model is dropped and an emergent process based on memo sorting is again stressed.

How to proceed with GT

As seen from the brief summary of the two schools in Table II, there are subtle variations in both orientation and procedure, and anyone using GT seriously should immerse themselves in the literature about the approaches for a more detailed account than is possible in a short article. Jones and Noble’s 2007 critique of GT’s use in management research provides a good introduction.

The key difference between any form of GT and other types of management research is the speed with which the researcher immerses him or herself in the data. The initial stages of selecting the question and designing the research will inevitably be shorter and more iterative, as new questions and new approaches are thrown up during the course of the research.

However, Partington (2002: pp. 138-143) advocates that before data collection the researcher needs to look at four fundamental issues:

  1. What is the general purpose of the research? For example, are you attempting to explore, describe, understand, explain or change a particular phenomenon?
  2. What is/are the research questions?
  3. What is your theoretical perspective?
  4. What is the design of the research? Whereas for other types of research, the design is clearly thought out at the beginning, that for GT tends to be "sketchy and incomplete" (Partington, 2002: p. 143) because of the use of theoretical sampling (see above) and the way the data collection strategy is driven by ideas.

Another distinctive feature of GT is the way in which the data analysed can come from a variety of sources, quantitative as well as qualitative: interviews, observation, documents, or even surveys (Geiger and Turley, 2003).

Daengbuppha et al. (2006) relied mainly on in-depth ethnographic interviews and on participant and non-participant observation for their study of heritage sites, but they also used published material and archive data.

Pauleen et al. (2007) used semi-structured interviews and discussions with the participants, the researcher’s journal, participants' notes, organisational documentation, and e-mails for an action research study.

Macri et al. (2002) conducted a grounded study of change in a small organisation, with participant observation as their main method, but they also used interviews and documents relating to the region’s economy, the organisation’s competitors, internal documents such as memos, calls for meetings, meeting excerpts, selling and delivery records, and consultants’ reports, as well as the company balance sheet.

Data collection, coding and analysis

This is clearly the heart of GT, and should be started as soon as possible. The processes of collection, coding and analysis are iterative and so as soon as the researcher has data, he or she should start coding and analysis.

If using interviews, the questioning technique should be as open as possible, to allow for the participant’s experience to emerge undiluted.

If using case studies, the number should be kept small given the practicalities of applying such an in-depth approach.

Partington (2002) recommends recording all interviews and transcribing them oneself, to ensure maximum familiarity with the data, as well as to allow the opportunity for reflection on interviewing style. Coding software can be used, although many researchers prefer manual analytical techniques such as cards, sticky notes, etc., and having to learn a new software package can be an added distraction for a researcher new to GT.

The first task of coding is to break the data into small pieces (analysis is usually line by line), noting the concepts that emerge, which are then provisionally sorted into categories. At this stage, open coding is normally followed: the categories should emerge purely from the data, and the researcher should keep an open mind.
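
Whether done on cards or in software, open coding is essentially a piece of bookkeeping: small fragments of text receive concept labels, and those labels are provisionally grouped into categories. The Python sketch below shows one way of keeping that record; the fragments, labels and groupings are invented purely for illustration.

    # Minimal open-coding record (illustrative only): fragments of data carry
    # concept labels, and concepts are provisionally grouped into categories.

    from collections import defaultdict

    coded_fragments = [
        ("We only hear about decisions once they are made", ["being kept in the dark"]),
        ("I end up sorting most problems out myself", ["self-reliance"]),
        ("My manager drops by when something has gone wrong", ["contact in crises"]),
    ]

    # Provisional grouping -- revised constantly as new data come in.
    provisional_categories = {
        "communication climate": {"being kept in the dark", "contact in crises"},
        "coping behaviour": {"self-reliance"},
    }

    # Index fragments by category so each category can later be compared
    # with new data (constant comparison).
    by_category = defaultdict(list)
    for fragment, concepts in coded_fragments:
        for concept in concepts:
            for category, members in provisional_categories.items():
                if concept in members:
                    by_category[category].append(fragment)

    for category, fragments in by_category.items():
        print(category, "->", len(fragments), "fragment(s)")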

Wastell (2001) conducted 26 interviews with the questions "Tell me what leadership means to you", and "How do you feel about your job?". The initial four interviews generated 51 concept cards which were sorted into nine categories. The main category was "subordinate conduct" with two subcategories, "leader influence" and "subordinate attributes".

Douglas (2006) conducted research into managerial decision making and found 15 categories with a total of 200 properties. From this, two central constructs evolved: "management decisions and consequences – employees' perspectives" and "self as manager – manager’s perspective". The former were conceptually categorised as “growth decisions”, “process decisions” and “reaction decisions”.

Categories can be further divided into distinguishing properties:

In Wastell’s (2001) example, the properties of leader influence were: leader contact, information sharing, relationship work, and taking advantage; those of subordinate attributes were: dealing with incidents, making excuses, opinion sharing and taking advantage.

Dimensions are the spectrum within which a property operates, for example leader contact could have the dimension frequent/infrequent.
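
A category with its properties and dimensions is, in effect, a small nested structure, and writing it down as one can help keep the three levels distinct. The sketch below uses Wastell's (2001) "leader influence" category for flavour; the class names and all dimensional ranges other than frequent/infrequent are assumptions made for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Property:
        """A distinguishing property of a category, with the dimensional range
        along which it can vary (e.g. frequent <-> infrequent)."""
        name: str
        dimensions: tuple = ()

    @dataclass
    class Category:
        name: str
        properties: list = field(default_factory=list)

    leader_influence = Category(
        name="leader influence",
        properties=[
            Property("leader contact", ("frequent", "infrequent")),  # dimension from Wastell (2001)
            Property("information sharing", ("open", "withheld")),   # invented range
            Property("relationship work"),
            Property("taking advantage"),
        ],
    )

    for prop in leader_influence.properties:
        print(prop.name, "|", " <-> ".join(prop.dimensions) or "dimensions not yet specified")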

The process of constant comparison, one of the uncontested principles of GT, occurs throughout.

Sternquist and Zhengyi (2006) describe how they categorised the first interview and then read subsequent interviews to assess whether or not the coding fitted the categories. If it did not, they either added new categories or broadened existing ones. This process continued until no new categories emerged.
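
Expressed procedurally, the comparison step Sternquist and Zhengyi describe amounts to: take each newly coded concept, check whether it fits an existing category, and if not either broaden a category or open a new one. The Python sketch below writes that decision down in a loose form; the keyword-overlap "fit" test is an invented stand-in for the researcher's own judgement.

    # Loose sketch of constant comparison: each new concept is either fitted to an
    # existing category (broadening it) or becomes a new category of its own.
    # The keyword-overlap test stands in for the researcher's judgement.

    categories = {
        "pricing pressure": {"price", "discount", "margin"},
        "supplier trust": {"trust", "reliability", "relationship"},
    }

    def compare_and_place(concept_keywords, categories):
        for name, keywords in categories.items():
            if concept_keywords & keywords:        # the concept fits this category
                keywords |= concept_keywords       # so broaden the category
                return name
        new_name = " / ".join(sorted(concept_keywords))
        categories[new_name] = set(concept_keywords)   # no fit: open a new category
        return new_name

    for concept in ({"discount", "promotion"}, {"delivery", "logistics"}):
        print(sorted(concept), "->", compare_and_place(concept, categories))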

Categories are also compared with one another in order to determine relationships and possible groupings. This is known as axial coding, a process "whereby the provisional categories [are] examined and compared with each other to identify any natural groupings that existed" (Goddard, 2004: p. 549).

Bakir and Bakir (2006) describe how they conducted axial coding in their study of strategy in leisure organisations.

"We analysed each category, using the axial coding procedure of the “coding paradigm” (Strauss, 1994, pp. 27-8), where concepts that relate to a category are classified as that category's properties, context, causal conditions, intervening conditions, actions/interactions or outcomes/consequences. This resulted in cumulative knowledge about relationships within the category and between categories."
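
The "coding paradigm" mentioned in that passage is, in practice, a fixed checklist of relationships to fill in for each category. The sketch below keeps that checklist in view during analysis; the category and its entries are invented for illustration, and nothing here is a format prescribed by Strauss and Corbin themselves.

    # The coding paradigm treated as a checklist for axial coding: each category
    # is examined for these (and only these) kinds of relationship.
    # The filled-in example is invented for illustration.

    PARADIGM_SLOTS = (
        "properties", "context", "causal_conditions",
        "intervening_conditions", "actions_interactions", "outcomes_consequences",
    )

    def paradigm_record(category_name, **slots):
        unknown = set(slots) - set(PARADIGM_SLOTS)
        if unknown:
            raise ValueError(f"not part of the coding paradigm: {sorted(unknown)}")
        return {"category": category_name, **{s: slots.get(s, []) for s in PARADIGM_SLOTS}}

    record = paradigm_record(
        "strategy talk",                                   # invented category
        context=["board away-days"],
        causal_conditions=["funding uncertainty"],
        actions_interactions=["managers reframe plans informally"],
    )
    for slot, entries in record.items():
        print(slot, ":", entries)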

Selective coding establishes the relationship between the categories and the core category: what Geiger and Turley (2003) call the "storyline".

Geiger and Turley (2003) conducted research on salespeople's knowledge of their clients. They used in-depth interviews and observations of sales calls and team meetings. Initially they coded with a completely open mind, and far more emerged about relationships than about knowledge. The next stage was axial coding, when the researchers focused on "discovering higher order connections between categories". The key concept was discovered to be "relationship type".

At the selective coding stage, they looked for the data’s storyline and any gaps. These gaps prompted further fieldwork. For example, they wanted to explore relational differences between service and product industries, and the impact of relationship type (for example sectors where the hard sell is the norm).

Saturation occurs once all categories can be subsumed into a core category, and all data fit the emerging theory. The core category is described by Seldén (2005: p. 118) as "a nucleus, a core category, a model, or a central category for others to circle around".

It is very important to avoid closing the research too early and to look for instances which don’t fit the theory.

In Wastell’s (2001) example, the researchers realised after the first four interviews that they were not focusing sufficiently on behavioural issues, so that further questioning and subsequent coding reflected the perspective that subordinates wanted to be liberated from organisational constraints and be more creative. A further category of "facilitative leadership behaviour" was developed to capture the concept that subordinates can feel "limited", "overloaded" or "unleashed". By the 16th interview they had reached saturation point on "limited" and "unleashed" but required more work on "overloaded".
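
One crude way to picture this kind of per-category judgement is to track, interview by interview, whether anything new is still appearing under each category, and to treat a category as saturated once several consecutive interviews have added nothing. The Python sketch below does that; the three-interview threshold and the data in it are invented for illustration.

    # Illustrative saturation tracker: a category counts as saturated here once
    # three consecutive interviews add nothing new to it. The threshold is an
    # assumption; the category names echo Wastell's example, the concepts do not.

    SATURATION_RUN = 3

    def saturated_categories(interviews):
        seen = {}        # category -> concepts observed so far
        unchanged = {}   # category -> consecutive interviews adding nothing new
        for coded in interviews:                       # coded: {category: {concepts}}
            for category, concepts in coded.items():
                new = concepts - seen.setdefault(category, set())
                seen[category] |= concepts
                unchanged[category] = 0 if new else unchanged.get(category, 0) + 1
        return [c for c, run in unchanged.items() if run >= SATURATION_RUN]

    interviews = [
        {"limited": {"rules", "sign-off"}, "overloaded": {"hours"}},
        {"limited": {"rules"}, "overloaded": {"hours", "covering for others"}},
        {"limited": {"sign-off"}, "overloaded": {"email volume"}},
        {"limited": {"rules"}, "overloaded": {"hours"}},
        {"limited": {"sign-off"}, "overloaded": {"hours"}},
    ]
    print("saturated so far:", saturated_categories(interviews))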

Analysis is not, however, done purely by data coding; memos are important tools used to record the researcher’s thoughts about the data as they occur, while diagrams can be a useful way of showing relationships between concepts. Both should be dated and coded in some way so as to link in with the data. They form an important part of the final theory building and the process of integrating the categories into an overall theory.
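
Because memos and diagrams only earn their keep if they can be traced back to the data, it helps to store each one with its date and the codes it concerns. A trivially simple record along those lines (an assumption for illustration, not a prescribed format) might be:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Memo:
        """A dated analytic note, tagged with the categories or codes it concerns
        so it can be traced back to the data during theory building."""
        written: date
        text: str
        linked_codes: list = field(default_factory=list)

    memo = Memo(
        written=date(2024, 5, 2),   # illustrative date
        text="Leader contact seems to shrink as workload rises; check later interviews.",
        linked_codes=["leader influence", "overloaded"],
    )
    print(memo.written, memo.linked_codes)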

The essential task of writing the theory is to make the research coherent and comprehensible to others, giving the explanation a structure (Mansourian, 2006). There should be some sort of "audit trail" so the reader knows how conclusions were arrived at.

The following example, quoted directly from Goddard’s (2004) account of his research on local government accountancy practices, provides a useful and detailed case study of data analysis in practice, drawing on the Strauss and Corbin tradition.

"Using Strauss and Corbin's (1990) approach, data was analysed through various stages of coding to produce an ordered data set which was integrated into a theory. Open coding of interview, document and observational data commenced with the identification of provisional categories, which are, “the early conceptual names assigned to data fragments” (Locke, 2001). These categories were identified partly from a line by line analysis of interviews and notes of meeting attendance. A paragraph level analysis and inspection was also undertaken of documentary evidence such as minutes of meetings, organisational reports and manuals and other organisational publications. The emerging categories were compared between interviewees and other sources within each case study to ensure internal consistency. A separate set of categories was initially developed for each case study. A comparison was then undertaken between case studies to provide a set of broader, yet grounded, categories.

The process of axial coding followed, whereby the provisional categories were examined and compared with each other to identify any natural groupings that existed. The groupings which were derived solely from immersion in the data and captured substantive aspects of the research situation are termed substantive categories. They represented another level of conceptual generality in the data and were allocated appropriate labels. Other groupings were derived from the researchers' own disciplinary sensibilities that introduced sociological and organisational meaning to the data. These are termed theoretical categories … Extensive use was made of theoretical memos and diagrams to identify theoretical implications of categories and of the relationships between them. Such memos were also used to develop the final grounded theory. It should be stressed that the above analyses were undertaken iteratively rather than merely sequentially. The validity of categories and groupings was achieved partly by the process of theoretical sampling whereby data was collected from a variety of sources as outlined above and compared within and between cases, and partly by the process whereby emergent findings were fed back to new interviewees for validation, until theoretical saturation was achieved. This was the point where new sources of data no longer provided new information either in terms of identifying new categories or refining existing categories or in the relationship between categories.

The final procedure was the process of selective coding. This requires the selection of the focal core category, that is, the central phenomenon which has emerged from the axial coding process. All other categories derived from the axial coding process must be related in some way to these focal core codes, either directly or indirectly." (Goddard, 2004: p. 549).

Goddard goes on to explain how the selective coding process was achieved by the use of Strauss and Corbin’s "paradigm model", which is a way of analysing the core concept.

Application of GT to management research

Jones and Noble (2007) attribute the growing popularity of GT in management studies to three factors:

  1. its ability to throw new light on old theory
  2. its relevance to practitioners
  3. its usefulness in uncovering micro-management issues.

Others have commented (for example, Elharidy et al., 2008) on its ability to support the study of actors in their everyday world.

We shall now look briefly at its use in various management disciplines, as evidenced by articles from the Emerald database.

Accounting

The growth of qualitative research in accounting has seen the launch of Emerald's new journal, Qualitative Research in Accounting & Management. Gurd (2008) maintains that GT is the dominant approach for qualitative field studies in accounting, and that the usual home for such studies is Accounting, Auditing & Accountability Journal.

A consistent problem, however, has been that accountancy researchers do not understand the way in which GT has developed, partly because they are borrowing the methodology from another discipline. Greenhalgh’s 2000 study is quoted by Gurd (2008) as an example of a good write-up of the theory.

Elharidy et al. (2008) consider GT to be particularly suitable for interpretive management accounting research (IMAR), which looks at the practices rather than the principles of management accounting:

"GT is consistent with IMAR in its emphasis on developing theory from data, the importance given to ‘local voices’, and its emphasis on explaining interactions between participants in the field".
(Elharidy et al., 2008: p. 143).

Elharidy et al. also provide a table comparing the features of IMAR and GT (2008: p. 149). They point out that most accounting researchers prefer the more structured, Strauss and Corbin approach. Goddard (2004), whose methodology is described above, used GT (Strauss and Corbin) for his study of UK local government.

Sales and marketing

Geiger and Turley (2003) maintain that the social psychological approach of looking at the behaviour of "living actors" and their interactions makes GT a suitable tool for analysing relationship selling and marketing.

Goulding (2005) compares the use in marketing research of three different qualitative methods – grounded theory, ethnography and phenomenology – and concludes that use of GT is particularly appropriate where behavioural issues are being studied, as in green, ethical or social marketing. Sternquist and Zhengyi (2006) apply GT to the creation of a product decision process model in China.

Organisational behaviour

The peculiar capacity of GT to capture both the individual’s view and his or her interactions with others has made this approach popular with studies that look at the way in which people behave in organisations.

Lakshman (2007) uses grounded theory to illustrate the role of leaders in information and knowledge management, applauding its longitudinal and processual approach. Douglas (2006) uses it to study management decision making. Macri et al. (2002) use it to explore resistance to change. Bakir and Bakir (2006) use it as a tool to look at managers’ understanding of strategy.

Leisure and tourism

Leisure and tourism have a strong behavioural element, which makes GT an obvious research approach. It has been used to create a model of visitor experience of heritage sites in Thailand (Daengbuppha et al., 2006).

Library and information science (LIS)

Mansourian (2006) and Seldén (2005) have both studied the use of GT in library and information science (LIS). Mansourian quotes Powell (1999) as maintaining that LIS lacks "well founded theories", so GT is an obvious choice because of its ability to construct theory.

Both authors trace the use of GT back to researchers at the University of Sheffield, and in particular to Ellis's 1987 doctoral thesis there. GT has been popular with other doctoral students and has also been employed for user studies and various aspects of online learning (Mansourian, 2006).

Limitations of GT

It can be seen from the examples cited earlier that GT has a great deal to offer, particularly with regard to the study of human behaviour and people in their natural environment. There are problems, however; the research can take considerable time and effort, and it can be difficult to predict the end, thus causing budgetary problems.

The most obvious limitation of GT is that – particularly in a discipline which has "borrowed" the approach from sociology, and therefore has not been close to, or part of, its continuing evolution – management researchers are unaware of the different schools, and therefore use the approach incorrectly and tend to cherry-pick.

This claim is made by quite a few writers, but is articulated most extensively by Jones and Noble (2007), who analyse 32 empirical GT studies in the management literature since 2002.

They show that many of the studies ignore fundamental tenets of GT: six make no reference to theoretical sampling, only four use axial and selective coding, and there is patchy use of constant comparison, memoing and theoretical saturation.

Thus GT is, Jones and Noble believe, in danger of losing its integrity. They offer three suggestions to reverse this trend:

  1. The researcher should state the school of GT to which he or she belongs.
  2. The objective should always be to generate a core category.
  3. A set of core procedures should be followed, which were endorsed by both Glaser and Strauss:
    – simultaneous collection, coding and analysis of data;
    – theoretical sampling;
    – constant comparison;
    – category and property development;
    – systematic coding, memoing, saturation and sorting.

Seldén (2005) goes beyond Jones and Noble to criticise GT itself, and in particular Glaser's insistence on the absence of pre-understanding, which he argues (p. 123) could lead to researchers duplicating existing work. He also doubts whether theory can be generated purely from the data. His views may be singular, although his article provides a very useful reflection on his own use of GT.

Conclusion & references

Conclusion

GT is a powerful qualitative research approach which has the capacity to see people as living actors in their everyday environment. As such, it is a particularly useful tool for any study which observes, and tries to account for, human behaviour. Because much of management is about how people behave in complex organisations, it can be a valuable tool for management research.

It can also provide a robust way of handling data and developing theory. The flaws Seldén identifies (above) have to some extent been addressed by the development of GT since its first use. Any researcher using GT, however, would do well to follow Jones and Noble's advice about procedure and to be explicit about the particular school they follow.

References

Bakir, A. and Bakir, V. (2006), "Unpacking complexity, pinning down the 'elusiveness' of strategy: A grounded theory study in leisure and cultural organizations", Qualitative Research in Organizations and Management: An International Journal, Vol. 1 No. 3, pp. 152-172.

Daengbuppha, J., Hemmington, N. and Wilkes, K. (2006), "Using grounded theory to model visitor experiences at heritage sites: Methodological and practical issues", Qualitative Market Research: An International Journal, Vol. 9 No. 4, pp. 367-388.

Douglas, D. (2006), "Intransivities of managerial decisions: a grounded theory case", Management Decision, Vol. 44 No. 2, pp. 259-275.

Elharidy, A.M., Nicholson, B. and Scapens, R.W. (2008), "Using grounded theory in interpretive management accounting research", Qualitative Research in Accounting & Management, Vol. 5 No. 2, pp. 139-155.

Ellis, D. (1987), "The derivation of a behavioural model for information retrieval system design", PhD thesis, Department of Information Studies, University of Sheffield, Sheffield.

Geiger, S. and Turley, D. (2003), "Grounded theory in sales research: an investigation of salespeople’s client relationships", Journal of Business & Industrial Marketing, Vol. 18 Nos. 6/7, pp. 580-594.

Goddard, A. (2004), "Budgetary practices and accountability habitus: A grounded theory", Accounting, Auditing & Accountability Journal, Vol. 17 No. 4, pp. 543-577.

Goulding, C. (2005), "Grounded theory, ethnography and phenomenology: A comparative analysis of three qualitative strategies for marketing research", European Journal of Marketing, Vol. 39 Nos. 3/4, pp. 294-308.

Gurd, B. (2008), "Remaining consistent with method? An analysis of grounded theory research in accounting", Qualitative Research in Accounting & Management, Vol. 5 No. 2, pp. 122-138.

Jones, R. and Kriflik, G. (2006), "Subordinate expectations of leadership within a cleaned-up bureaucracy: A grounded theory study", Journal of Organizational Change Management, Vol. 19 No. 2, pp. 154-172.

Jones, R. and Noble, G. (2007), "Grounded theory and management research: a lack of integrity?", Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 2, pp. 84-103.

Lakshman, C. (2007), "Organizational knowledge leadership: a grounded theory approach", Leadership & Organization Development Journal, Vol. 28 No. 1, pp. 51-75.

Macrì, D.M., Tagliaventi, M.R. and Bertolotti, F. (2002), "A grounded theory for resistance to change in a small organization", Journal of Organizational Change Management, Vol. 15 No. 3, pp. 292-310.

Mansourian, Y. (2006), "Adoption of grounded theory in LIS research", New Library World, Vol. 107 Nos. 9/10, pp. 386-402.

McKnight, M. (2007), "A grounded theory model of on-duty critical care nurses' information behavior: The patient-chart cycle of informative interactions", Journal of Documentation, Vol. 63 No. 1, pp. 57-73.

Partington, D. (Ed.) (2002), "Grounded theory", in Essential Skills for Management Research, Sage Publications, London.

Pauleen, P.J., Corbitt, B. and Yoong, P. (2007), "Discovering and articulating what is not yet known: Using action learning and grounded theory as a knowledge management strategy", The Learning Organization, Vol. 14 No. 3, pp. 222-240.

Seldén, L. (2005), "On Grounded theory – with some malice", Journal of Documentation, Vol. 61 No. 1, pp. 114-129.

Sternquist, B. and Zhengyi, C. (2006), "Food retail buyer behaviour in the People's Republic of China: a grounded theory model", Qualitative Market Research: An International Journal, Vol. 9 No. 3, pp. 243-265.

Wastell, D.G. (2001), "Barriers to effective knowledge management: Action research meets grounded theory", Journal of Systems and Information Technology, Vol. 5 No. 2, pp. 21-36.

Main textbooks on GT

Glaser, B. (1978), Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, Sociology Press, Mill Valley, CA.

Glaser, B. (1992), Basics of Grounded Theory Analysis. Emergence vs Forcing, Sociology Press, Mill Valley, CA.

Glaser, B. (1998), Doing Grounded Theory: Issues and Discussions, Sociology Press, Mill Valley, CA.

Glaser, B. and Strauss, A. (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine Publishing Co., Chicago, IL.

Strauss, A.L. (1987), Qualitative Analysis for Social Scientists, Cambridge University Press, Cambridge.

Strauss, A.L. and Corbin, J. (1990), Basics of Qualitative Research: Grounded Theory Procedures and Techniques, Sage, Thousand Oaks, CA.

Strauss, A.L. and Corbin, J. (1998), Basics of Qualitative Research: Techniques and Procedures for Developing Theory, 2nd ed., Sage, Thousand Oaks, CA.