Design a research study
The design of a piece of research refers to the practical way in which the research is conducted: a systematic attempt to generate evidence to answer the research question. The term "research methodology" is often used to mean something similar; however, different writers use the two terms in slightly different ways. Some, for example, use "methodology" to describe the tools used for data collection, which others (more properly) refer to as methods.
What is research design?
The following are some definitions of research design by researchers:
Design is the deliberately planned 'arrangement of conditions for collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy of procedure'.
(Selltiz, C.S., Wrightsman, L.S. and Cook, S.W. (1981), Research Methods in Social Relations, Holt, Rinehart & Winston, London; quoted in Jankowicz, A.D., Business Research Methods, Thomson Learning, p. 190.)
The idea behind a design is that different kinds of issues logically demand different kinds of data-gathering arrangement so that the data will be:
- relevant to your thesis or the argument you wish to present;
- an adequate test of your thesis (i.e. unbiased and reliable);
- accurate in establishing causality, in situations where you wish to go beyond description to provide explanations for whatever is happening around you;
- capable of providing findings that can be generalised to situations other than those of your immediate organisation.
(Jankowicz, A.D., Business Research Methods, Thomson Learning, p. 190)
The design of the research involves consideration of the best method of collecting data to provide a relevant and accurate test of your thesis, one that can establish causality if required (see What type of study are you undertaking?), and one that will enable you to generalise your findings.
Design of the research should take account of the following factors, which are briefly discussed below with links to subsequent pages or other parts of the site where there is fuller information.
What is your theoretical and epistemological perspective?
Although management research is much concerned with observation of humans and their behaviour, to a certain extent its epistemological framework derives from that of science. Positivism assumes the independent existence of measurable facts in the social world, and researchers who take this perspective will want a fairly exact system of measurement. On the other hand, interpretivism assumes that humans interpret events, and researchers working from this perspective will adopt a more subjective approach.
What type of study are you undertaking?
Are you conducting an exploratory study, obtaining an initial grasp of a phenomenon, or a descriptive study, providing a profile of a topic or institution?
Karin Klenke provides an exploratory study of issues of gender in management decisions in Gender influences in decision-making processes in top management teams (Management Decision, Volume 41 Number 10)
Damien McLoughlin provides a descriptive study of action learning as a case study in There can be no learning without action and no action without learning (European Journal of Marketing, Volume 38 Number 3/4)
Or the study can be explanatory, examining the causal relationship between variables; this can include the testing of hypotheses or the examination of causes:
Martin et al. examined ad zipping and repetition in Remote control marketing: how ad fast-forwarding and ad repetition affect consumers (Marketing Intelligence & Planning, Volume 20 Number 1) with a number of hypotheses e.g. that people are more likely to remember an ad that they have seen repeatedly.
What is your research question?
The most important issue here is that the design you use should be appropriate to your initial question. Implicit within your question will be issues of size, breadth, the relationship between variables, how easy it is to measure variables, etc.
The two different questions below call for very different types of design:
The example Dimensions of library anxiety and social interdependence: implications for library services (Jiao and Onwuegbuzie, Library Review, Volume 51 Number 2) looks at attitudes and the relationship between variables, and uses very precise measurement instruments in the form of two questionnaires, with 43 and 22 items respectively.
In the example Equity in Corporate Co-branding (Judy Motion et al., European Journal of Marketing, Volume 37 Number 7), the RQs posit a need to describe rather than to link variables, and the methodology used is one of discourse theory, which involves looking at material within the context of its use by the company.
What sample size will you base your data on?
The sample is the source of your data, and it is important to decide how you are going to select it.
See Sampling techniques.
What research methods will you use and why?
We referred above to the distinction between methods and methodology. There are two main approaches to methodology – qualitative and quantitative.
| Quantitative methods: | Qualitative methods: |
| --- | --- |
| typically use numbers | typically use words |
| are deductive | are inductive |
| involve the researcher as, ideally, an objective, impartial observer | require more participation and involvement on the part of the researcher |
| may focus on cause and effect | focus on understanding phenomena in their social, institutional, political and economic context |
| require a hypothesis | do not require a hypothesis |
| have the drawback that they may force people into categories, and cannot go into much depth about subjects and issues | have the drawback that they focus on a few individuals, and their findings may therefore be difficult to generalise |
For more detail on each of the approaches, see Quantitative approaches to design and Qualitative approaches to design later in this feature.
Note, you do not have to stick to one methodology (although some writers recommend that you do). Combining methodologies is a matter of seeing which part of the design of your research is better suited to which methodology.
How will you triangulate your research?
Triangulation refers to the process of ensuring that any defects in a particular methodology are compensated for by the use of another at appropriate points in the design. For example, if you carry out a quantitative survey and need more in-depth information about particular aspects of it, you may decide to use in-depth interviews, a qualitative method.
Here are a couple of useful articles to read which cover the issue of triangulation:
- Combining quantitative and qualitative methodologies in logistics research by John Mangan, Chandra Lalwani and Bernard Gardner (International Journal of Physical Distribution & Logistics Management, Volume 34 Number 7) looks at ways of combining methodologies in a particular area of research, but much of what they say is generally applicable.
- Quantitative and qualitative research in the built environment: application of "mixed" research approach by Dilanthi Amaratunga, David Baldry, Marjan Sarshar and Rita Newton (Work Study, Volume 51 Number 1) looks at the relative merits of the two research approaches, and despite reference to the built environment in the title acts as a very good introduction to quantitative and qualitative methodology and their relative research literatures. The section on triangulation comes under the heading 'The mixed (or balanced) approach'.
What steps will you take to ensure that your research is ethical?
Ethics in research is a very important issue. You should design the research in such a way that you take account of such ethical issues as:
- informed consent (have the participants had the nature of the research explained to them?)
- checking whether you have permission to record conversations with a tape recorder and to transcribe them
- always treating people with respect, consideration and concern.
How will you ensure the reliability and validity of your research?
This is about the replicability of your research and the accuracy of its procedures and techniques. Would the same results be obtained if the research were repeated? Are the measurements of the research methods accurate and consistent? Could they be used in other, similar contexts with equivalent results? Would another researcher using the same instruments achieve the same results? Is the research free from error or bias on the part of the researcher or the participants? (Do the participants, for example, say what they believe the management, or the researcher, wants to hear? In a survey done on some course material, the mathematical module received glowing reports – which led the researcher to wonder whether this had anything to do with the author being the Head of Department!)
How successfully has the research actually achieved what it set out to achieve? Can the results of the study be transferred to other situations? Does x really cause y – in other words, is the researcher correct in maintaining a causal link between these two variables? Is the research design sufficiently rigorous, and have alternative explanations been considered? Have the findings really been accurately interpreted? Have other events intervened which might impact on the study, e.g. a large-scale redundancy programme? (For example, in an evaluation of the use of CDs for self-study with a worldwide group of students, it was established that some groups had not had sufficient explanation from the tutors as to how to use the CD. This could have affected their rather negative views.)
Are the findings applicable in other research settings? Can a theory be developed that can apply to other populations? For example, can a particular study about dissatisfaction amongst lecturers in one university be applied generally? This is particularly applicable to research which has a relatively wide sample, as with a questionnaire, or which adopts a scientific technique, as with an experiment.
Can the research be applied to other situations? Particularly relevant when applied to case studies.
In addition, each of the sections in this feature on quantitative and qualitative approaches to research design contains notes on how to ensure that the research is reliable.
Some basic definitions
In order to answer a particular research question, the researcher needs to investigate a particular area or group, to which the conclusions from the research will apply. The former may comprise a geographical location such as a city, an industry (for example the clothing industry), an organisation/group of organisations such as a particular firm/type of firm, a particular group of people defined by occupation (e.g. student, manager etc.), consumption of a particular product or service (e.g. users of a shopping mall, new library system etc.), gender etc. This group is termed the research population.
The unit of analysis is the level at which the data is aggregated: for example, it could be a study of individuals as in a study of women managers, of dyads, as in a study of mentor/mentee relationships, of groups (as in studies of departments in an organisation), of organisations, or of industries.
Unless the research population is very small, we need to study a subset of it that is representative of the whole. This is known as a sample, and the selection of the components of the sample that will give a representative view of the whole is known as the sampling technique. It is from this sample that you will collect your data.
In order to draw up a sample, you first need to identify the members of the research population. A list of these – which may come from a telephone directory, a list of company members, or a list of companies in the area – is known as a sampling frame.
In Networking for female managers' career development (Margaret Linehan, Journal of Management Development, Volume 20 Number 10), the sampling technique is described as follows:
"A total of 50 senior female managers were selected for inclusion in this study. Two sources were used for targeting interviewees, the first was a listing of Fortune 500 top companies in England, Belgium, France and Germany, and, second, The Marketing Guide to Ireland. The 50 managers who participated in the study were representative of a broad range of industries and service sectors including: mining, software engineering, pharmaceutical manufacturing, financial services, car manufacturing, tourism, oil refining, medical and state-owned enterprises."
Sampling may be done on either a probability or a non-probability basis. This is an important research design decision, and one which will depend on such factors as whether the theory behind the research is positivist or idealist, whether qualitative or quantitative methods are used, etc. Note that the two methods are not mutually exclusive, and may be used for different purposes at different points in the research – say, purposive sampling to find out key attitudes, followed by a more general, random approach.
Note that there is a very good section from an online textbook on sampling: see William Trochim's Research Methods Knowledge Base.
In probability sampling, each member of a given research population has an equal chance of being selected. It involves, literally, the selection of respondents at random from the sampling frame, having decided on the sample size. This type of sampling is more likely if the theoretical orientation of the research is positivist, and the methodology used is likely to be quantitative.
Probability sampling can be:
- random – a given number of members of the total population is selected completely at random.
- systematic – every nth element of the population is selected. This can cause a problem if the interval of selection means that the elements share a characteristic: for example, if every fourth seat of a coach is selected it is likely that all the seats will be beside a window.
- stratified random – the population is divided into segments: in a university, for example, you could divide the population into academics, administrators, and academic-related (related professional) staff. A random sample is then drawn from each group. This has the advantage of allowing you to categorise your population according to particular features. A.D. Jankowicz provides useful advice (Business Research Methods, Thomson Learning, 2000, p. 197).
The concept of fit in services flexibility and research: an empirical approach (Antonio J Verdú-Jover et al., International Journal of Service Industry Management, Volume 15 Number 5) uses stratified sampling: the study concentrates on three sectors within the EU – chemicals, electronics and vehicles – with the sample being stratified within these sectors.
- cluster – a particular subgroup is chosen at random. The subgroup may be based on a particular geographical area, say you may decide to sample particular areas of the country.
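The three random techniques above can be contrasted in a short Python sketch. This is purely illustrative: the population figures, group names and sample sizes are invented for the example, and in a real study the sampling frame would come from your actual population list.

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Select n members of the sampling frame completely at random."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

def systematic_sample(frame, interval, start=0):
    """Select every nth element of the sampling frame."""
    return frame[start::interval]

def stratified_random_sample(strata, n_per_stratum, seed=None):
    """Draw a random sample of equal size from each stratum;
    `strata` maps a stratum name to a list of its members."""
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

# Hypothetical university population divided into three staff groups
population = {
    "academic": [f"AC{i}" for i in range(300)],
    "administrator": [f"AD{i}" for i in range(150)],
    "academic-related": [f"AR{i}" for i in range(50)],
}
sample = stratified_random_sample(population, n_per_stratum=10, seed=1)
```

Note how the stratified version guarantees that each staff group is represented, whereas a simple random sample of the pooled population might, by chance, miss the smallest group entirely.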
Non-probability sampling
Here, members of the population do not have an equal chance of being selected; instead, selection happens according to some factor such as:
- convenience/accidental – being present at a particular time, e.g. at lunch in the canteen. This is an easy way of getting a sample, but it may not be representative, because selection is based on your convenience rather than on a true understanding of the characteristics of the population.
In "Saying is one thing; doing is another": the role of observation in marketing research (Qualitative Market Research: An International Journal, Volume 2 Number 1), Matthews and Boote use a two-stage sampling process, with convenience sampling followed by time sampling: see their methodology.
- purposive – people can be selected deliberately because their views are relevant to the issue concerned. However, the drawback of the technique is its subjectivity: your view of your selection criteria may change over the duration of your research. Use can be made of:
- "key informant technique" – i.e. people with specialist knowledge
- using people at selected points in the organisational hierarchy
- snowball, with one person being approached and then suggesting others.
In "The benefits of the implementation of the ISO 9000 standard: empirical research in 288 Spanish companies", a sample was selected based on all certified companies in a particular area, because this was where the highest number of certified companies could be found.
- quota – the assumption is made that there are subgroups in the population, and a quota of respondents is chosen to reflect this diversity. This subgroup should be reasonably representative of the whole, but care should be taken in drawing conclusions for the whole population. For example, a quota sample taken in New York State would not be representative of the whole of the United States.
In Monitoring consumer confidence in food safety: an exploratory study, de Jonge et al. use quota sampling, with age, gender, household size and region as selection variables in a food safety survey. Read about the methodology under Materials and methods.
Non-probability sampling methods are more likely to be used in qualitative research, where the greater degree of collaboration with respondents affords the opportunity to gather data in greater detail. The researcher is more likely to be involved in the process and to adopt an interpretivist theoretical stance.
Calculating the sample size
In purposive sampling, this will be determined by judgement; in other more random types of sample it is calculated as a proportion of the sampling frame, the key criterion being to ensure that it is representative of the whole. (E.g. 10 per cent is fine for a large population, say over 1000, but for a small population you would want a larger proportion.)
If you are using stratified sampling you may need to adjust your strata and collapse into smaller strata if you find that some of your sample sizes are too small.
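The rule of thumb above can be expressed as a small Python helper. The 10 per cent figure and the threshold of 1,000 come from the text; the 30 per cent figure for small populations is an illustrative assumption, not a fixed rule.

```python
def sample_size(frame_size, large_threshold=1000,
                large_fraction=0.10, small_fraction=0.30):
    """Rule-of-thumb sample size as a proportion of the sampling frame:
    10 per cent for a large frame (over 1,000); a larger proportion
    (here an assumed 30 per cent) for a small one."""
    fraction = large_fraction if frame_size > large_threshold else small_fraction
    return max(1, round(frame_size * fraction))

print(sample_size(2000))  # 200
print(sample_size(100))   # 30
```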
The response rate
It is important to keep track of the response rate against your sample frame. If you are depending on postal questionnaires, you will need to plan into your design time to follow up the questionnaires. What is considered to be a good response rate varies according to the type of survey: if you are, say, surveying managers, then a good response would be 50 per cent; for consumer surveys, the response rate is likely to be lower, say 10 to 20 per cent.
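Tracking the response rate against a target can be sketched in a few lines of Python; the figures below are hypothetical, and the target rates would be the ones appropriate to your type of survey.

```python
def response_rate(received, sent):
    """Response rate as a percentage of questionnaires sent out."""
    return 100.0 * received / sent

def needs_follow_up(received, sent, target_rate):
    """True if the response rate is still below the target,
    so a planned follow-up mailing should go ahead."""
    return response_rate(received, sent) < target_rate

# A hypothetical manager survey aiming at a 50 per cent response rate
print(response_rate(32, 80))          # 40.0
print(needs_follow_up(32, 80, 50.0))  # True: follow up the non-respondents
```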
Quantitative approaches to research design
The thing that characterises quantitative research is that it is objective. The assumption is that facts exist independently of the observer, and that the researcher is a totally objective observer of situations, with no power to influence them. As such, it generally starts from a positivist or empiricist position.
The research design is based on one iteration in collection of the data: the categories are isolated prior to the study, and the design is planned out and generally not changed during the study (as it may be in qualitative research).
What is my research question? What variables am I interested in exploring?
It is usual to start your research by carrying out a literature review, which should help you formulate a research question.
Part of the task of the above is to help you determine what variables you are considering. What are the key variables for your research and what is the relationship between them – are you looking to explore issues, to compare two variables or to look at cause and effect?
The Dutch heart health community intervention "Hartslag Limburg": evaluation design and baseline data (Gaby Ronda et al., Health Education, Volume 103 Number 6) describes a trial of a cardiovascular prevention programme which indicated the importance of its further implementation. The key variables are the types of health related behaviours which affect a person's chance of heart disease.
The following studies compare variables:
Service failures away from home: benefits in intercultural service encounters (Clyde A Warden et al., International Journal of Service Industry Management, Volume 14 Number 4) compares service encounters inside and outside Taiwan (location being the independent variable) in order to look at certain aspects of 'critical incidents' in intercultural service encounters.
The concept of fit in services flexibility and research: an empirical approach (Antonio J Verdú-Jover et al., International Journal of Service Industry Management, Volume 15 Number 5) looks at managerial flexibility in relation to different types of business, service and manufacturing.
They can also look at cause and effect:
In Remote control marketing: how ad fast-forwarding and ad repetition affect consumers (Brett A.S. Martin et al., Marketing Intelligence & Planning, Volume 20 Number 1), the authors look at two variables associated with advertising – zipping (fast-forwarding) and repetition – and at their effect on a third variable, consumer behaviour, i.e. the ability to remember ads. The study also looks at the interaction between the first two variables, i.e. whether they work together to increase recall.
What is the hypothesis?
It is usual with quantitative research to proceed from a particular hypothesis. The object of research would then be to test the hypothesis.
In the example quoted above, Remote control marketing: how ad fast-forwarding and ad repetition affect consumers, the researchers decided to explore a neglected area of the literature: the interaction between ad zipping and repetition, and came up with three hypotheses:
The influence of zipping
H1. Individuals viewing advertisements played at normal speed will exhibit higher ad recall and recognition than those who view zipped advertisements.
Ad repetition effects
H2. Individuals viewing a repeated advertisement will exhibit higher ad recall and recognition than those who see an advertisement once.
Zipping and ad repetition
H3. Individuals viewing zipped, repeated advertisements will exhibit higher ad recall and recognition than those who see a normal speed advertisement that is played once.
What are the appropriate measures to use?
It is very important, when designing your research, to understand what you are measuring. This will call for a close examination of the issues involved: is your measure suitable for the hypothesis and research question under consideration? The type of scale you use will dictate the statistical procedures available for analysing your data, and it is important to have an understanding of these at the outset in order to obtain the correct level of analysis – one that will throw the best light on your research question and help test your hypothesis.
It is also important to understand what type of data you are trying to collect. Are you wanting to collect data that relates simply to different types of categories, for example, men and women (as in, say, differences in decision-making between men and women managers), or do you want to rank the data in some way? Choices as far as the nature of data are concerned again dictate the type of statistical analysis.
Data can be categorised as follows:
- Nominal – representing particular categories, e.g. men or women.
- Ordinal – ranked in some way, such as the order of passing a particular point in a shopping centre.
- Interval – ranked on a scale where the interval between points is the same throughout; the most typical example is temperature.
- Ratio – like interval data, but with a true zero point, so that ratios between values are meaningful – for example, length or weight.
- Scalar – data ranked on a scale whose points are ordered but whose intervals are not quantifiable.
Note that some of the above categories, especially 'interval' and 'ratio' are drawn from a scientific model which assumes exact measurement of data (temperature, length etc.). In management research, you are unlikely to want to or be able to apply such a high degree of exactitude, and are more likely to be measuring less exact criteria which do not have an exact interval between them.
Here are some examples of use of data in management research. This one illustrates the use of different categories:
The concept of fit in services flexibility and research: an empirical approach (see above) uses an approach which itemises the different aspects the researchers wished to measure: flexibility mix, performance and the firm's general data.
This one looks at categories and also at ranked data (ordinal):
In Remote control marketing: how ad fast-forwarding and ad repetition affect consumers (also see above), the measures were embedded in a 2 (speed of ad presentation: normal, fast-forwarded) × 2 (repetition: none, one repetition) between-subjects factorial design.
The following examples look at measures on a scale, which may relate to tangible factors such as frequency, or more intangible ones which relate to attitude or opinion:
How many holidays do you take in a year?
One __ Between 2 and 5 __ Between 6 and 10 __ More than 10 __
Tick the option which most agrees with your views.
Navigating my way around the CD was:
Very easy __ Easy __ Neither easy nor hard __ Hard __ Very hard __
The latter type of data is very common in management research, and is known as scalar data. A very common measure for such data is the Likert scale:

Strongly agree __________
Agree __________
Neither agree nor disagree __________
Disagree __________
Strongly disagree __________
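Likert responses are usually coded numerically for analysis. A common convention – though only a convention, and the codes remain ordinal, since the intervals between the points are not quantifiable – is to map the five points of the scale onto 5 down to 1, as in this Python sketch:

```python
# Conventional 5-to-1 coding of the five Likert response options
LIKERT_CODES = {
    "strongly agree": 5,
    "agree": 4,
    "neither agree nor disagree": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def score_responses(responses):
    """Convert verbal Likert responses into ordinal numeric codes."""
    return [LIKERT_CODES[r.lower()] for r in responses]

responses = ["Agree", "Strongly agree", "Neither agree nor disagree"]
print(score_responses(responses))  # [4, 5, 3]
```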
How will I analyse the data?
Quantitative data are invariably analysed by some sort of statistical means, such as a t-test, a chi-square test, cluster analysis, etc. It is very important to decide at the planning stage what your method of analysis will be: this will in turn affect your choice of measure. Both your analysis and your measure should be suitable to test your hypothesis.
You also need to consider what type of package you will need to analyse your data. It may be sufficient to enter the data into an Excel spreadsheet, or you may wish to use a statistical package such as SPSS or Minitab.
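As a sketch of what such an analysis involves – in practice you would normally let Excel, SPSS or a similar package do this – here is a two-sample (Welch's) t statistic computed with only the Python standard library. The ad-recall scores are invented, echoing the zipping example above.

```python
from statistics import mean, variance

def two_sample_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples: the difference
    in means divided by the standard error of that difference."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    standard_error = (va / na + vb / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / standard_error

# Hypothetical ad-recall scores: normal-speed vs zipped viewing
normal_speed = [7, 8, 6, 9, 7, 8]
zipped = [5, 4, 6, 5, 3, 5]
t = two_sample_t(normal_speed, zipped)
print(t > 0)  # True: higher recall in the normal-speed group
```

A statistical package would go on to convert the t statistic into a p-value against the appropriate degrees of freedom; the point here is simply that the choice of test has to match the measure and the hypothesis.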
What are the instruments used in quantitative research?
Or, put more simply, what methods will you use to collect your data?
In scientific research, it is possible to be reasonably precise by generating experiments in laboratory conditions. Whilst the field experiment has a place in management research, as does observation, the most usual instrument for producing quantitative data is the survey, most often carried out by means of a questionnaire.
You will find numerous examples of questionnaires and surveys in research published by Emerald, as you will in any database of management research. Questionnaires will be discussed at a later stage but here are some key issues:
- It is important to know exactly what questions you want answers to. A common failing is to realise, once you have got the questionnaire back, that you really need answers to a question which you never asked. Thus the questionnaire should be rigorously researched and the questions phrased as precisely as possible.
- You are more likely to get a response if you give people a reason to respond - commercial companies sometimes offer a prize, which may not be possible or appropriate if you are a researcher in a university, but it is usual in that case to give the reason behind your research, which gives your respondent a context. Even more motivational is the ease with which the questionnaire can be filled in.
- How many responses will I need? This concerns the eventual size of your dataset and depends upon the degree of complexity of your planned analysis and how you are treating your variables (for example, if you want to show the effect of a variable, you will need a larger response size; likewise if you are showing changes in variables).
Other instruments that are used in quantitative research to generate data are experiments, historical records and documents, and observation.
Note that some authors claim that for a design to be a true experiment, items must be randomly assigned to groups; if there is no random assignment but there is some sort of control group or multiple measures, the design may be quasi-experimental. If your survey fits neither of these descriptions, it may, according to these authors, be sufficient for descriptive purposes, but not if you seek to establish a causal relationship.
For more information on types of design, see William Trochim's Research Methods Knowledge Base section on types of design.
What are the advantages and drawbacks of quantitative research?
The main advantage of quantitative research is that it is easy to determine its rigour: because of the objectivity of quantitative studies, it is easy to replicate them in another situation. For example, a well-constructed questionnaire can be used to analyse job satisfaction in two different companies; likewise, an observation studying consumer behaviour in a shopping centre can take place in two different such centres.
Quantitative methods are also good at obtaining a good deal of reliable data from a large number of sources. Their drawback is that they are heavily dependent on the reliability of the instrument: in the case of the questionnaire, it is vital to ask the right questions in the right way. This in turn depends on having sufficient information about a situation, which is not always possible. In addition, quantitative studies may generate a large amount of data, but the data may lack depth and fail to explain complex human processes such as attitudes to organisational change, or how learning takes place.
For example, a quantitative study on a piece of educational software may show that on the whole people felt that they had learnt something, but may not necessarily show how they learnt, which an observation could.
For this reason, quantitative methods are often used in conjunction with qualitative methods: for example, qualitative methods of interviewing may be used as a way of finding out more about a situation in order to draw up an informed quantitative instrument; or to explore certain issues which have appeared in the quantitative study in greater depth.
Qualitative approaches to research design
Qualitative research operates from a different epistemological perspective than quantitative research, which is essentially objective. It is a perspective that acknowledges the essential difference between the social world and the scientific one, recognising that people do not behave according to the laws of nature, but have a whole range of feelings, perceptions and attitudes which are essentially subjective. The theoretical framework is thus likely to be interpretivist or realist. Indeed, the researcher and the research instrument are often combined, with the former acting as the interviewer or observer – as opposed to quantitative studies, where the research instrument may be a survey and the subjects may never see the researcher.
In an interview for Emerald, Professor Slawomir Magala, Editor of the Journal of Organizational Change Management, has this to say about qualitative methods:
"We follow the view that the social construction of reality is personal, experienced by individuals and between individuals – in fact, the interactions which connect us are the building blocks of reality, and there is much meaning in the space between individuals."
In contrast to the statistical reliance of quantitative research, qualitative data are based on observation and words, and analysis rests on interpretation and pattern recognition rather than statistics.
Miles and Huberman list the following as typical criteria of qualitative research:
- Intense and prolonged contact in the field.
- Designed to achieve a holistic or systemic picture.
- Perception is gained from the inside based on actors' understanding.
- Little standardised instrumentation is used.
- Most analysis is done with words.
- There are multiple interpretations available in the data.
Miles, M. and Huberman, A.M. (1994)
Qualitative Data Analysis: An Expanded Sourcebook, Sage, London
To what types of research questions is qualitative research relevant?
Qualitative research is best suited to questions which require in-depth exploration of data over a relatively small sample. For example, it would be too time consuming to ask a question such as "Please describe in detail your reaction to colour x" of a large number of people; for a large sample it would be more appropriate simply to ask "Do you like colour x?" and give people a "yes/no" option. By asking the former question of a smaller number of people, you would get a more detailed result.
Qualitative research is also best suited to exploratory and comparative studies; to a more limited extent, it can also be used for "cause-effect" type questions, providing these are fairly limited in scope.
One of the strengths of qualitative research is that it allows the researcher to gain an in-depth perspective, and to grapple with complexity and ambiguity. This is what makes it suitable to analysis of particular groups or situations, or unusual events.
What is the relationship of qualitative research to hypotheses?
Qualitative research is usually inductive: that is, researchers gather data, and then formulate a hypothesis which can be applied to other situations.
In fact, one of the strengths of qualitative research is that it can proceed from a relatively small understanding of a particular situation, and generate new questions during the course of data collection, as opposed to needing to have all the questions set out beforehand. Indeed, it is good practice in qualitative research to go into a situation as free from preconceptions as possible.
How will you analyse the data?
There is not the same need with qualitative research to determine the measure and the method of analysis at an early stage of the research process, mainly because there are no standard ways of analysing data as there are for quantitative research: it is usual to go with whatever is appropriate for the research question. However, because qualitative data usually involves a large amount of transcription (e.g. of taped interviews, videos of focus groups etc.) it is a good idea to have a plan of how this should be done, and to allow time for the transcription process.
There are two well-attested methods of qualitative data analysis: content analysis, which involves looking at emerging patterns in the data, and grounded analysis, which involves going through a number of guided stages and which is closely linked to grounded theory.
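As a first pass of content analysis, researchers often count how frequently each coded theme occurs and how widely it is spread across interviews. The following is a minimal illustrative sketch; the interview data and theme labels are invented placeholders, not a standard coding scheme:

```python
from collections import Counter

# Hypothetical coded transcripts: each interview is a list of theme labels
# assigned by the researcher during coding (labels here are invented).
coded_interviews = [
    ["workload", "autonomy", "workload", "recognition"],
    ["autonomy", "workload", "communication"],
    ["recognition", "workload", "communication", "communication"],
]

# Count total mentions of each theme across all interviews.
theme_counts = Counter(
    code for interview in coded_interviews for code in interview
)

# Count in how many distinct interviews each theme appears.
theme_spread = Counter(
    code for interview in coded_interviews for code in set(interview)
)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mentions across {theme_spread[theme]} interviews")
```

A pattern that recurs in most interviews (here, "workload") is a candidate for an emerging theme, whereas a theme mentioned often but only by one informant may say more about that individual than about the group.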
What are the main instruments of qualitative research?
Or, put another way, what are the main methods used to collect data? These can be organised according to their methodology (note that the following is not an exhaustive list; for that, you should consult a good book on qualitative research):
Ethnography
As the name suggests, this methodology derives from anthropology and involves observing people as a participant within their social and cultural system. The most common methods of data collection are:
- Interviewing, which means discussions with people either on the phone, by email or in person, where the purpose is to collect data which is by its nature unquantifiable and more difficult to analyse by statistical means, but which provides in-depth information. The interview can be either structured, meaning that the interviewer works through a set list of questions, or semi-structured, meaning that the interviewer has a number of questions or a purpose, but the interview can still go off in unanticipated directions.
- Focus groups, where a group of people is assembled at one time to give their reactions to a product, or to discuss an issue. There is usually some form of facilitation, involving either guided discussion or a product demonstration.
- Participant observation – the researcher observes behaviour of people in the organisation, their language, actions, behaviour etc.
For some examples of participant observation, see Methods of empirical research, and for examples of interview technique, see Techniques of data collection and analysis.
Historical analysis
This is, literally, the analysis of historical documents of a particular company, industry etc. It is important to understand exactly what your focus is, and also which historical school or theoretical perspective you are drawing on.
Grounded theory
This is an essentially inductive approach, applied when understanding of a particular phenomenon is sought. A distinctive feature is that the design of the research involves several iterations: initial exploration is followed by a theory, which is then tested.
In Grounded theory methodology and practitioner reflexivity in TQM research (International Journal of Quality & Reliability Management , Volume 18 Number 2), Leonard and McAdam use grounded theory to explore TQM, on the grounds that quantitative methods "fail to give deep insights and rich data into TQM in practice within organizations", and that it is much more appropriate to listen to the individual experiences of participants.
Action research
This is a highly participative form of research, carried out in collaboration with those involved in a particular process, and often concerned with some sort of change.
Storytelling
This is when the researcher listens to the stories of people in the organisation and triangulates them against official documents.
Discourse analysis
This methodology draws on a theory of language in which meaning is not fixed but is negotiated through social context.
Helen Francis in The power of "talk" in HRM-based change (Personnel Review, Volume 31 Number 4) describes her use of discourse theory as follows:
"The approach to discourse analysis drew upon Fairclough's seminal work in which discourse is treated as a form of social practice and meaning is something that is essentially fluid and negotiated rather than being authored individually (Fairclough, 1992, 1995).
"For Fairclough (1992, 1995) the analysis of discursive events is three dimensional and includes simultaneously a piece of text, an instance of discursive practice, and an instance of social practice. Text refers to written and spoken language in use, while "discursive practices" allude to the processes by which texts are produced and interpreted. The social practice dimension refers to the institutional and organisational factors surrounding the discursive event and how they might shape the nature of the discursive practice.
"For the purposes of this research, the method of analysis included a description of the language text and how it was produced or interpreted amongst managers and their subordinates. Particular emphasis was placed on investigating the import of metaphors that are characteristic of HRM, and the introduction of HRM-based techniques adopted by change leaders in their attempt to privilege certain themes and issues over others."
Fairclough, N., 1992, Discourse and Social Change, Polity Press, Cambridge.
Fairclough, N., 1995, Critical Discourse Analysis:
Papers in the Critical Study of Language, Longman, London.
Discourse theory can be applied to the written as well as the spoken word and can be used to analyse marketing literature as in the following example:
Equity in corporate co-branding: the case of Adidas and the All Blacks by Judy Motion et al. (European Journal of Marketing, Volume 37 Number 7), where discourse theory is used to analyse branding messages.
How rigorous is qualitative research?
It is often considered harder to demonstrate the rigour of qualitative research, simply because it may be harder to replicate the conditions of the study, and apply the data in other similar circumstances. The rigour may partly lie in the ability to generate a theory which can be applied in other situations, and which takes our understanding of a particular area further.
Rigour in qualitative research is greatly aided by:
- confirmability - which does not necessarily mean that someone else would adopt the same conclusion, but rather there is a clear audit trail between your data and your interpretation; and that interpretations are based on a wide range of data (for example, from several interviews rather than just one). (This is related to triangulation, see below.)
- authenticity - are you drawing on a sufficiently wide range of rich data, do the interpretations ring true, have you considered rival interpretations, do your informants agree with your interpretation?
In Cultural assumptions in career management: practice implications from Germany (Hansen and Willcox, Career Development International, Volume 2 Number 4), the main method used is ethnographic interviews, and findings are verified by comparing data from the two samples.
Reliability is also enhanced if you can triangulate your data from a number of different sources or methods of data collection, at different times and from different participants.
Dennis Cahill, in When to use qualitative methods: a new approach (Marketing Intelligence & Planning, Volume 14 Number 6), has this to say about the reliability of qualitative research:
"While there are times when qualitative techniques are inappropriate to the research goal, or appropriate only in certain portions of a research project, quantitative techniques do not have universal applicability, either. Although these techniques may be used to measure "reality" rather precisely, they often suffer from a lack of good descriptive material of the type which brings the information to life. This lack is particularly felt in corporate applications where implementation of the results is sought. Therefore, whether one has any interest in the specific research described above, if one is involved in implementation of research results – something we all should be involved in – the use of qualitative research at midpoint is a technique with which we should become familiar.
"It is at this point that some qualitative follow up – interviews or focus groups for example – can serve to flesh out the results, making it possible for people at the firm to understand and internalize those results."
Can qualitative research be used with quantitative research?
Whereas some researchers use only qualitative or only quantitative methodologies, the two are frequently combined: for example, qualitative methods may be used in an exploratory way to obtain further information prior to developing a quantitative research instrument. In other cases, qualitative methods are used to complement quantitative methods and obtain a greater degree of descriptive richness:
In When to use qualitative methods: a new approach, Dennis Cahill describes how qualitative methods were used after an extensive questionnaire had been employed to carry out research for a new publication dedicated to the needs of the real estate market. Analysis of the questionnaire produced a five-segment typology (winners, authentics, heartlanders, wannabes and maintainers), which was then tested by means of an EYE-TRAC test, in which a selected sample was videotaped looking at a magazine of houses for sale.
Planning your research design
Once you have established the key features of your design, you need to create an outline project plan which will include a budget and a timetable. In order to do this you need to think first about the activities of your data collection: how much data are you collecting, where etc. (See the section on Sampling techniques.) You also need to consider your time period for data collection.
Over what time period will you collect your data?
This refers to two types of issues:
Type of study
Should the research be a 'snapshot', examining a particular phenomenon at a particular time, or should it be longitudinal, examining an issue over a time period? If the latter, the object will be to explore changes over the period.
A longitudinal study of corporate social reporting in Singapore (Eric W K Tsang, Accounting, Auditing & Accountability Journal, Volume 11 Number 5) examines social reporting in that country from 1986 to 1995.
Sometimes, you may have 'one shot' at the collection of your data - in other words, you plan your sample, your method of data collection, and then analyse the result. This is more likely to be the case if your research approach is more quantitative.
However, other types of research approach involve stages in the collection of data. For example, in grounded theory research, data is collected and analysed and then the process is repeated as more is discovered about the subject. Likewise in action research, there is a cyclical process of data collection, reflection and more collection and analysis.
If you adopt an approach where you combine quantitative and qualitative methods, then this methodology will dictate that you do a series of studies, whether qualitative followed by quantitative, or vice versa, or qualitative/quantitative/qualitative.
Grounded theory methodology and practitioner reflexivity in TQM research (Leonard and McAdam, International Journal of Quality & Reliability Management, Volume 18 Number 2) adopts a three-stage approach to the collection of data.
Doing the plan
The following are some of the costs which need to be considered:
- Travel to interview people.
- Postal surveys, including follow-up.
- The design and printing of the questionnaire, especially if there is use of Optical Mark Reader (OMR) and Optical Character Recognition (OCR) technology.
- Programming to "read" the above.
- Processing the data to produce meaningful results.
- Transcription of any tape recorded interviews.
- Cost of design of any internet survey.
- Employment of a research assistant.
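Costs such as those above are easiest to control if they are totalled explicitly, with a margin for the unexpected. The sketch below is purely illustrative; every figure and item name is an invented placeholder to be replaced with your own quotes and estimates:

```python
# Illustrative cost plan; all figures are hypothetical placeholders.
costs = {
    "travel to interviews": 250.00,
    "postal survey (print, postage, follow-up)": 480.00,
    "questionnaire design and printing": 300.00,
    "transcription of taped interviews": 600.00,
    "research assistant (20 hours)": 400.00,
}

contingency_rate = 0.10  # margin for unforeseen costs (an assumption)

subtotal = sum(costs.values())
total = subtotal * (1 + contingency_rate)

for item, amount in costs.items():
    print(f"{item}: {amount:.2f}")
print(f"subtotal: {subtotal:.2f}; total with contingency: {total:.2f}")
```

The point of the contingency line is that budgets for data collection almost always overrun; building the margin in from the start avoids having to cut the sample later.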
Make a list of the key stages of your research. Does it have several phases, for example, a questionnaire, then interviews?
How long will each phase take? Take account of factors such as:
- Sourcing your sampling frame
- Determining the sample
- Approaching interview subjects
- Preparations for interviews
- Writing questionnaires
- Response time for questionnaires (include a follow-up stage)
- Analysing the responses
- Writing the report
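One simple way to turn phase estimates like these into a timetable is to chain the durations from a start date. The following sketch is illustrative only; the phase names are taken from the list above, but the durations and start date are invented assumptions:

```python
from datetime import date, timedelta

# Hypothetical phase durations in weeks; substitute your own estimates.
phases = [
    ("sourcing the sampling frame", 2),
    ("determining the sample", 1),
    ("writing the questionnaire", 3),
    ("response time, including follow-up", 6),
    ("analysing the responses", 4),
    ("writing the report", 4),
]

start = date(2024, 1, 8)  # an arbitrary example start date
current = start
for phase, weeks in phases:
    end = current + timedelta(weeks=weeks)
    print(f"{phase}: {current} -> {end}")
    current = end  # the next phase begins when this one ends

total_weeks = sum(weeks for _, weeks in phases)
print(f"total: {total_weeks} weeks, finishing {current}")
```

Laying the phases end to end like this makes it obvious where the time actually goes (here, response time and analysis dominate), which is exactly the check the next paragraph recommends.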
When drawing up a schedule, it is tempting to make it as short as possible, in the belief that you can achieve more in the time available than is realistic. However, it is very important to be as accurate as possible in your scheduling.
Planning is particularly important if you are working to a specific budget and timetable as for example if you are doing a PhD, or if you are working on a funded research project, which has a specific amount of money available and probably also specific deadlines.