How to... use a repertory grid

Eliciting and designing a repertory grid

One feature of the repertory grid is that, because much of the work is done in conjunction with the interviewee, the design of the actual grid will be a shared process.

However, you need to consider the overall purpose of the grid, in the light of your research design:

  • What is your research question?
  • How big will your sample be? (Note that many researchers use a small sample with this technique, as it is fairly time-consuming to carry out.)
  • What will you analyse, and how?

All these and other questions will need to be considered carefully before you start interviewing. In particular, if you are using the grid for a number of interviews (which will be most of the time, a sample of one normally being considered a bit small!), you will need to ensure a certain degree of commonality in the design.

As with other techniques, the repertory grid is rarely used in isolation; it is common to combine it with other methods.

Preparation

Before each repertory grid interview, it is important to ensure the following:

  • Availability of a quiet room with phones off the hook, so that the interview can take place in a relaxed, uninterrupted atmosphere.
  • Availability of a pre-prepared blank grid sheet with the topic in the left-hand corner, space for the elements in the top row and the constructs along the left-hand and right-hand side.
  • Explaining to the person concerned the reason for the interview and ensuring that they understand the procedure.
  • Agreement about confidentiality.

All interviewing calls for good social skills and the ability to put the interviewee at ease, and repertory grid is no exception.

Agreeing the topic

The extent to which the topic is up for negotiation depends on the situation, but in most research situations it would be chosen by the researcher. It is important that the topic is sufficiently specific: not just leadership in general, for example, but a particular aspect, such as leadership and integrity. This will be helped by having a clear research question, as in the following examples.

In their 2007 article, "Articulating appraisal system effectiveness based on managerial cognitions", Wright and Cheung (Personnel Review, Vol. 36 No. 2, pp. 206-230) examine appraisal cognition using the following research questions (RQs):

RQ1. How do practising managers see, interpret and make sense of their performance management experiences?

RQ2. In what way can these managerial cognitions of appraisal system experience lead to a deeper understanding of the way forward in designing more effective performance management systems?

In "Destination brand images: a business tourism perspective", Hankinson, G. (2005), Journal of Services Marketing, Vol. 19 No. 1, pp. 24-32, the researcher wanted to find the answer to three research questions connected with destinations for business tourism. The first RQ looked at the key brand image attributes used by events managers to characterize destinations, and a repertory grid was used for this purpose.

Eliciting the elements

The elements chosen must:

  • cover the topic of investigation evenly, i.e. be representative,
  • be clear and homogeneous, i.e. do not mix people and objects,
  • be free from value judgements,
  • be familiar to the interviewee,
  • be expressed as simply as possible – nouns are easiest, verbs should be active gerunds, e.g. "deciding" rather than "making decisions",
  • together, form a set – for example, avoid mixing abstract and concrete nouns, nouns and verbs, etc.,
  • be mutually exclusive and not subsets of one another; for example, you could not have both "cat" and "Siamese".

Elements may be either provided or elicited during the interview. In the previous section the elements in the wine example would be provided, and those for the lecturers example elicited. A common type of element is the personal element, i.e. particular people known to the interviewee. Elements are elicited by providing a general category, and then asking the interviewee to give specific examples.

In the case of provided elements, these may be deduced by looking at the literature, or by some prior research, perhaps through exploratory interviews, or some piloting of the technique.

In "Articulating appraisal system effectiveness based on managerial cognitions", Wright and Cheung came up with nine appraisal systems activities by reading the literature and consulting managers.

In "Using the repertory grid and laddering technique to determine the user's evaluative model of search engines", Journal of Documentation, Vol. 63 No. 2, pp. 259-280, Johnson and Crudge (2007) used search engines as constructs and dyadic elicitation (comparing two search engines) to obtain constructs.

In "Facilities management in medium-sized UK hotels", International Journal of Contemporary Hospitality Management, Vol. 14 No. 2, pp. 72-80, Jones (2002) used a number of elements of facilities management e.g. reception work; general cleaning; catering, etc.

In Hankinson's 2005 article, "Destination brand images: a business tourism perspective", Journal of Services Marketing, Vol. 19 No. 1, pp. 24-32, the researcher used 15 destinations as elements, grouped into six categories, as shown below:

Destinations used as elements in the repertory grids:

  • Commercial: Manchester, Leeds, Bristol
  • Ports: Southampton, Portsmouth, Liverpool
  • Seaside resorts: Llandudno, Brighton, Eastbourne
  • New towns: Warrington, Telford
  • Historic: Bath, Edinburgh, York
  • Small market towns: Guildford

Once the elements are established, they should be set out in some way that enables easy comparison. It is common to write each element on a card, although the interview can also be conducted by computer. There should be at least ten elements in order to yield a sufficient range of comparisons.

Sometimes, an "ideal" element is introduced as a way of providing further points of comparison.

Eliciting constructs

Constructs are the data, the unit of analysis, which result from the interview and are generally elicited from the interviewee.

The method used to elicit constructs is to compare and contrast the elements in sets of three, known as triads (or occasionally in pairs, known as dyads). The interviewee is shown three elements and asked to comment on similarities and differences: "In what way are any two of these similar, but different from the third, in terms of [link with the topic]?". The same procedure is then followed with further combinations of elements. Each successive triad should change at least two of the elements.

The point of using comparisons is to enable opinions and ideas which are often just "below the surface" to become more explicit; in other words, to dig out tacit knowledge and make it concrete.
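
To make the triad procedure concrete, the short Python sketch below shows one way an interviewer's triad schedule could be generated so that each successive triad changes at least two elements. The element names and the number of triads are invented for illustration; this is a minimal sketch of the combinatorics, not a procedure taken from any of the studies cited here.

    from itertools import combinations

    def triad_schedule(elements, n_triads):
        """Pick triads so that each successive triad shares at most
        one element with the previous one (at least two change)."""
        all_triads = list(combinations(elements, 3))
        schedule = [all_triads.pop(0)]
        while len(schedule) < n_triads and all_triads:
            prev = set(schedule[-1])
            for i, triad in enumerate(all_triads):
                if len(prev & set(triad)) <= 1:  # at most one element kept
                    schedule.append(all_triads.pop(i))
                    break
            else:
                break  # no suitable triad left
        return schedule

    # Hypothetical personal elements (people known to the interviewee)
    elements = ["line manager", "best teacher", "close colleague",
                "former boss", "mentor", "team member"]
    for triad in triad_schedule(elements, 4):
        print(triad)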

This is the account that Wright and Cheung give of how they carried out their interviews:

"Given these representative appraisal system elements, managers were shown each of the nine appraisal activities in groups of threes called a 'Triad' and asked the Kellyian question: 'In what way are any two of these similar, but different from the third, in terms of how well or how not well they are done in your organization?'

Using E1, E2 and E3 as an example (E1: Attending appraisal training/E2: Attending the annual interview/E3: Reading appraisal guidelines and notes) – a typical response is usually in the form of a construct such as, E1 and E2 are similar because 'I can contribute' and that is why it is done well; whereas E3 is different and not well done because 'I don't get a chance to have my input'. (Hence: I can contribute – I don't get a chance to have my input). According to Kelly (1955), a construct is always bi-polar in nature and reflects a dimension from which an individual formulates perceptions to make sense of the world. Each appraisal system element was triadically compared twice, using different combinations to elicit as many personal constructs as possible".

In "Improving team performance using repertory grids", Team Performance Management, Vol. 11 Nos. 5/6, pp. 179-187, Boyle (2005) used a repertory grid technique to research soft skills issues in team performance for programmers. The elements were programmers themselves and the comparison process is referred to as follows:

"Of the three elements selected, two elements are compared on some important similarity and the remaining element is compared on some difference between itself and the other two elements. The comparison should be based upon a question that the researcher asks, such as 'In what way are two of these people similar to each other and different from the third?'. The reason for pairing the elements in the triad is then stated. These elements should encourage the respondent to think in contrasts such as 'Liked – Disliked', and 'Personal hero – Villain'" (Fournier, V. (1996), "Cognitive maps in the analysis of personal change during work role transition", British Journal of Management, Vol. 7, pp. 87-105).

The similarities and differences referred to above form the constructs, and they are set out in the left and right hand columns of the grid (similarities on the left and differences on the right).
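
In code, a single row of the grid could be represented as a bipolar construct with its two poles, as in the minimal Python sketch below. The data layout is an illustrative assumption, not a prescribed format; the construct itself is the one quoted from Wright and Cheung above.

    from dataclasses import dataclass, field

    @dataclass
    class Construct:
        """One bipolar construct: the similarity pole is written on
        the left of the grid, the contrast pole on the right."""
        left_pole: str                               # similarity pole
        right_pole: str                              # contrast pole
        ratings: dict = field(default_factory=dict)  # element -> rating

    # The construct quoted from Wright and Cheung above
    c = Construct(left_pole="I can contribute",
                  right_pole="I don't get a chance to have my input")
    c.ratings["Attending appraisal training"] = 1    # close to the left pole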

Qualities of a good construct are well summed up by Jankowicz (2004) as:

  • providing clear contrasts
  • appropriately detailed
  • relevant to the topic.

If the constructs are not sufficiently clear, or if they contain clichés or unexplained jargon, then the interviewer needs to probe sensitively to deepen their understanding and arrive at a satisfactory construct. One technique that can be used is "laddering down": this involves probing questions such as "What sort of thing do you have in mind...?" or "Can you give me an example?", or asking the interviewee to explain, as if to a Martian, what such a person does.

Johnson and Crudge (2007) describe the laddering process, as well as Kelly's belief that some constructs are core to a person's belief system, as follows:

"Kelly believed that construct systems are hierarchically organized and interrelated by cause and effect, with some constructs being central to the beliefs of an individual. These core concepts can be visualized as forming the topmost points of a pyramid, with the lower positions filled by the system of interrelated constructs. During laddering, the interviewer starts at any point within this system, termed the 'seed item', and using a series of probing questions the participant is guided up, down and across the (hierarchical) construct system (Rugg et al., 1999). This method is essentially a combination of Hinkle's (1965) laddering technique used to move upwards within the hierarchy, with Landfield's (1971) pyramid technique used to move downwards in the hierarchy. It has now become standard for the term 'laddering' to refer to the combined method" (Johnson and Crudge, "Using the repertory grid and laddering technique to determine the user's evaluative model of search engines", Journal of Documentation, Vol. 63 No. 2, pp. 259-280).

Johnson and Crudge also describe how they used a technique of laddering up and down, taking the probe "why is that important to you?" to move the participants higher up their pyramids, and "how is it different?" to move lower. An example of a probe statement is shown below:

Figure showing an example of a probe statement
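
As a rough illustration of this hierarchical picture, the Python sketch below models a small fragment of a construct pyramid as a tree, with "why is that important to you?" moving up towards the core and "how is it different?" moving down. The constructs shown are invented for illustration and are not taken from Johnson and Crudge's data.

    # Each construct points to the more core construct above it.
    # All constructs here are invented examples for a search-engine topic.
    pyramid = {
        "feeling in control": None,   # core construct (top of the pyramid)
        "results are easy to scan": "feeling in control",
        "ranking seems trustworthy": "feeling in control",
        "titles and snippets are clear": "results are easy to scan",
    }

    def ladder_up(construct):
        """Follow 'why is that important to you?' towards the core."""
        path = [construct]
        while pyramid.get(path[-1]):
            path.append(pyramid[path[-1]])
        return path

    print(ladder_up("titles and snippets are clear"))
    # ['titles and snippets are clear', 'results are easy to scan',
    #  'feeling in control']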

Rating systems

The rating scale needs to be decided in advance; a 5-point scale is common. It is inadvisable to go beyond 7 points, as the distinctions become too fine to be meaningful. Kelly himself used a 2-point scale (just the construct and its opposite), mainly in order to focus on meaning rather than numbers.

Wright and Cheung describe their rating system as follows:

"Once the constructs were elicited, the manager was then asked to rate each of the nine appraisal activities using a 5-point scale based on their own generated bi-polar constructs (used as semantic differentials); A rating of '1' represented elements that were closest to the left-hand side of the bi-polar construct elicited; and a rating of '5' represented elements that were best explained by the bi-polar construct pole on the right of the grid. After all of the elements were rated, the respondents were asked to choose the side of the bi-polar constructs that, in their view, represents a key attribute of an effective appraisal system ... Upon the completion of this exercise, all grid respondents' constructs and ratings were aggregated (Bougon, 1992) to generate one aggregate repertory grid ... The data was then inputted into the RepGrid programme to generate a principal component analysis, collective cognitive maps, and cluster analysis for discussion".