How to...
Conduct experiments

An experiment is a deliberate attempt to manipulate a situation in order to test a hypothesis that a particular cause creates a particular effect – in other words, that varying the input will affect the output.

"A procedure adopted on the chance of its succeeding, for testing a hypothesis etc., or to demonstrate a known fact." (Oxford Dictionary of English)

The experiment in management research

What is an experiment?

"In the scientific method, an experiment ... is a set of actions and observations, performed in the context of solving a particular problem or question, to support or falsify a hypothesis or research concerning phenomena." (Wikipedia)

The experiment is the cornerstone of the scientific, positivist approach to knowledge, and the basic method of the natural sciences. Much of what we know about the natural world we know through experiments.

The following are its key characteristics:

  • It is a structured and manipulated process, a deliberate imposition of a treatment.
  • It has one or more independent variables (the causes or inputs) and a dependent variable (the effect or output), the goal being to see how changing the former affects the latter.
  • It needs to control other variables which might cause observable changes in the dependent variable, so that the effect of the selected independent variable can be isolated from all other possible causes.
  • It usually tests a hypothesis, derived from a particular theory.

"Basically, an experimental design requires several factors: a setting where the real world can be simulated, one or more independent variables that can be varied, and resultant effects on dependent variables which can be observed."

Jacob, F. and Ehret, M. (2006) "Self-protection vs opportunity seeking in business buying behavior: an experimental study", Journal of Business & Industrial Marketing, Vol. 21 No. 2

The experiment is a particularly useful method to explain change, to look at cause and effect, or to test a hypothesis deduced from a theory. An important proviso is the ability to isolate the independent, or causal, variable from other causes of the particular effect you are examining.

In a biological experiment, we can vary the amount of light (the independent variable) a plant receives, and so show how light affects plant growth. It is possible to grow the plant in laboratory conditions, from which other factors can be excluded.

The experiment in management research – drawbacks

Maylor and Blackmon (2005, pp. 202-3) point out how important it is, in drawing up a hypothesis, to ensure that the cause of A is B and not C or D. In order to do this, you need to isolate the causes, and examine each in turn. In a laboratory, you would set up experimental conditions for each factor, test each and assess its likely impact on the dependent variable.

In scientific experiments it is usually possible to create conditions that exclude other possible causes from the one you are examining – that is part of the function of a laboratory. Humans, however, operate as part of social organisms, which are inevitably more complex and difficult to categorise than natural organisms.

Imagine you are investigating the cause of absenteeism at work. You hypothesise that the cause is stress. Senior management, however, believe that its cause is inadequate supervision. How would you set up a measure for stress, or exclude other factors? How easy would it be to investigate stress, if management had other ideas? 

The above example illustrates three difficulties with the experimental method in management: the difficulty of measuring aspects of human behaviour, the difficulty of disentangling causes, and the fact that many of the environments where you are likely to undertake field research will be subject to other influences, creating conditions which may be outside your control and unsympathetic to your need to test a particular hypothesis.

Furthermore, humans have the attribute of consciousness, which makes observation difficult; they may behave differently if they know they are being watched – for example, they may adopt behaviours which they think are expected of them. The following quote from the famous sociologist Anthony Giddens is also applicable in the business and management area:

"An experiment can...be defined as an attempt, within artificial conditions established by an investigator, to test the influence of one or more variables upon others. Experiments are widely used in the natural sciences, but the scope for experimentation in sociology is limited. We can only bring small groups of individuals into a laboratory setting and in such experiments, people know they are being studied and may behave differently from normal". (Giddens, 1989)

The experiment in management research – advantages

Despite these drawbacks, experiments have been used in management research, including some famous ones.

Harvard Business School professor Elton Mayo studied the productivity of workers in the Hawthorne Plant of the Western Electric Company in Cicero, Illinois in the 1920s, with a view to determining what affected worker productivity. The researchers manipulated the conditions of the workers in various ways, and came to a number of conclusions:

  • Individual aptitude is a poor predictor of performance.
  • There was a "group life" amongst the workers which affected performance.
  • Each group had its own norm of a fair day's work.
  • The workplace is a social system.

However, they also observed that productivity tended to increase whatever the conditions, and came to the conclusion that observation had an impact on performance – which substantiates the point made by Giddens (see above).

In 1911, Frederick W. Taylor published The Principles of Scientific Management in which he looked at how the application of scientific method could aid productivity. He introduced time and motion studies, which looked at the sequence of motions used to perform a job, and expounded the idea of scientific management, which comprised:

  • Replacing rule-of-thumb work methods with scientific ones.
  • Placing emphasis on the importance of training.
  • Ensuring that scientific methods are followed on an ongoing basis.
  • Dividing work up between workers and managers, with the workers carrying out the tasks and the managers implementing scientific management in order to plan work.

It would almost be true to say, therefore, that the science of management owes its genesis partly to experimental design!

One of the key advantages of experimental design in management research is the fact that it requires "a setting where the real world can be simulated". The advantage of a simulation is that you can set up an imaginary situation with realistic elements, so you are not dependent on the constraints of the real world. Thus, if you want to investigate buying behaviour, or reaction to brands, you are not dependent on finding real buyers buying real products, or reacting to real brands. This means that you can set up the variables to reflect the hypotheses that you want to test. Used in conjunction with the questionnaire (see using questionnaires effectively), the experiment can help yield some quite sophisticated information on attitudes and behaviour (see the examples in types of experiment).

Experimental design can also provide excellent opportunities for observing behaviour – both the Hawthorne and the Taylor experiments used forms of observation, and yielded interesting results.

However, experiments differ from observation in that they deliberately attempt to manipulate a situation, as opposed to observing what is there, or else, as with Taylor, fitting what is observed into a framework. The Hawthorne researchers may have observed, but their presence changed the workers' environment and conditions. This may well be beyond the researcher's control and can be a cumbersome process – the Hawthorne research took five years because of the difficulties in manipulating the physical conditions.

The experimental method also differs from the survey in that it seeks to explain causes, while surveys look at relationships between variables (in the absenteeism example quoted above, a survey could be used to ask staff members what their reasons for absenteeism were, but these would merely yield related factors rather than proven causes).

In summary, the experiment remains of value in management research, although it is used differently and "pure" experiments remain relatively rare. As an undergraduate or MBA student, you should probably use an experimental design with extreme care, and certainly under the close counsel of your supervisor.

References

Giddens, A. (1989), Sociology, Polity Press, Cambridge, UK

Maylor, H. and Blackmon, K. (2005), Researching Business and Management, Palgrave Macmillan, Basingstoke, UK

Some design considerations

On this page, we shall look in more detail at the design considerations that create the best conditions for experiments. (We shall look at particular designs in the next section, Types of experiment.)

True cause and effect

An experiment tests a hypothesis that is deduced from a theory.

In "Self-protection vs opportunity seeking in business buying behavior: an experimental study" (Journal of Business & Industrial Marketing, Vol. 21 No. 2), Frank Jacob and Michael Ehret use an experiment to test prospect theory, according to which successful economic agents tend to be more self-protective, while underperformers take bigger risks.

However, as we saw in the previous section, successful experimentation depends on being able to isolate and exclude other factors, i.e. to prove the hypothesis that X is the cause of Y, you have to exclude A, B or C. In a scientific experiment, you would be able to set up laboratory conditions that looked at A's, B's and C's effect on Y independently; in a business, it may not be so easy to do this.

To take a very simplistic example, suppose you were to assess the effect on productivity of increasing the temperature in the office. Assuming you were to be able to establish a measure of productivity, you would need to check that there was no other cause of the (presumed) decrease in productivity – such as workload, time of day, the work taking place after a sociable lunch where alcohol was consumed, or an e-mail having just been issued about proposed redundancies!

It is important to identify the hypothesis you are testing very precisely, as well as all the possible variables, and:

  • create a means for establishing which of the variables is the cause
  • prove not merely that X has the effect Y, but also that if X is absent, then so is Y.

Experiments can be most effective if you can limit the number of variables you are looking at.

Imagine, for example, you had two groups of workers, one of which had received a particular form of training and the other had not. If you compare performance measures of the two groups against attendance on the course, you can tell whether or not there is a relationship between course attendance and performance.

Sequence

So far, however, all you will have done is to prove a relationship between the two variables. To indicate cause and effect it is necessary to look at the sequence in time, and to prove that the dependent variable, in this case job performance, followed the training course. You would need, therefore, to observe performance (or look at performance records) both before and after the course.

Developers of educational software will often plan "experiments" with their courseware by carrying out trials with students. In order for these trials to be effective, however, it is necessary to take "measurements" (of motivation, aptitude or whatever the claim is that the software can help with) both before and after the trial.
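As a rough sketch of how such a before-and-after comparison might be analysed (the scores below are invented, and Python with the scipy library is simply one convenient tool), a paired test compares each subject's post-course measurement with their own pre-course measurement:

    # Hypothetical performance scores for the same ten employees,
    # measured before and after a training course.
    from scipy import stats

    pre_course  = [62, 70, 55, 68, 74, 60, 65, 58, 71, 66]
    post_course = [68, 72, 61, 70, 75, 66, 70, 60, 74, 69]

    # A paired t-test respects the sequence: each person is compared
    # with their own earlier score.
    t_stat, p_value = stats.ttest_rel(post_course, pre_course)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A significant result here still only shows that performance rose after the course; ruling out other causes requires the controls discussed below.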

Experimental treatment

There is a scientific protocol for eliminating alternative causes, which involves defining the variables and separating out those which it is important to keep constant, in order to minimise confounding.

Variables are thus categorised as follows:

  1. Experimental (independent) variables – the inputs: in the above examples, the office temperature or the training.
  2. Dependent variables – the effect on the output: worker productivity or performance.
  3. Controlled variables – factors that you need to hold constant, such as the time of day or the aptitude of the employees attending the training.
  4. Uncontrolled variables – factors over which you have no control: in the case of the temperature experiment, the liquid lunch or the redundancy e-mail.

The control group

The control group is a vital principle in experimental design, and involves having a group which does not receive treatment, for comparison purposes.

In the above examples, it could be that groups of workers are not subjected to increased temperature/training.

A common design occurs in double-blind drug trials, where one group is treated with the drug and the other receives a placebo; neither the subjects nor the researchers know which group has the treatment and which has the placebo.

You need, however, to ensure that your two groups, treatment and non-treatment, are matched. In order to do this, you need to pay attention to sampling.

Sampling

Jankowicz (2005, pp. 237-8) suggests two possible approaches to sampling, both of which depend on a fairly large population and on the researcher knowing quite a bit about the group:

  • purposive sampling, in which you deliberately select groups which have the same characteristics
  • random sampling – as the name suggests, this depends on random assignment to the group, with the effect that additional factors and differences are also randomly assigned.

Meneses and Palacio (2006) maintain that convenience samples are good when you are dependent on co-operation with your subjects – see Quasi experiments.

Random assignment

A principle of experimental design is that of random assignment, which means assigning people to groups on a random basis, from a common pool, in order to cancel out group differences which might otherwise occur, and ensure similarity in the groups.

A recent experiment concerned the effect of prayer on heart patients awaiting operations for arteriosclerosis (blocked arteries). As soon as subjects were recommended for the operation, they were randomly assigned to one of two groups, one of which was prayed for while the other was not.

As above, you would need to have sufficient control over the situation to be able to assign people on this basis.
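The mechanics of random assignment are simple; the sketch below uses an invented pool of subjects and Python's standard random module to split a common pool into two comparable groups:

    import random

    # A hypothetical pool of subjects awaiting assignment.
    pool = ["Ali", "Bea", "Carl", "Dina", "Ed", "Fay", "Gus", "Hana"]

    random.seed(42)        # fixed seed so the assignment can be reproduced
    random.shuffle(pool)   # shuffle the common pool

    midpoint = len(pool) // 2
    treatment_group = pool[:midpoint]   # receives the treatment
    control_group = pool[midpoint:]     # receives no treatment

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)

Because assignment depends only on the shuffle, individual differences are spread across the two groups by chance rather than by the researcher's judgement.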

Treatment groups

In the literature describing experimental design, you will often find reference to "between-subjects design" and "within-subjects design".

  • between-subjects design occurs when two or more groups are compared. The groups are comparable but are subject to different treatment.
  • within-subjects design occurs when one group is subject to two different treatments, as when, for example, a class does a test at two different points in time.

Measurement

You need to find an appropriate measurement for your variables. One form of measurement which is often used in management experiments is the questionnaire. Questions may be factual, e.g. position in the organisation, salary band etc., or may be more sophisticated, designed to test attitude or behaviour. You will obviously need to give careful thought to your questions, and you may well find that the literature surrounding your hypothesis provides you with some useful measures, as in the examples below. You can then tabulate the responses and compare the independent and dependent variables.

The following are some examples of questionnaires, as well as their analysis, used in experiments.

In "Self-protection vs opportunity seeking in business buying behavior: an experimental study" by Frank Jacob and Michael Ehret (Journal of Business & Industrial Marketing, Vol. 21 No. 2), the authors provide an example of a questionnaire used to assess decision-making.

In "The effect of strategic and tactical cause-related marketing on consumers' brand loyalty" by Douwe van den Brink et al. (Journal of Consumer Marketing, vol. 23 no. 1), the authors use a questionnaire to measure both attitudes and behaviour, using commonly accepted scales.

In "Different kinds of consumer response to the reward recycling technique: similarities at the desired routine level" (Asia Pacific Journal of Marketing and Logistics, vol. 18 no. 1), Gonzalo Díaz Meneses and Asunción Beerli Palacio use three questionnaires over a period of time with Likert-type scales to measure ecological conscience.

Alternatively, you can use information kept by the organisation, such as sales performance figures, or a form of experimental measurement, as in the length of time taken to perform certain tasks in Taylor's time and motion study.

Whatever measurement method you choose, you will need to tabulate your data, look for a systematic relationship between the dependent and independent variables, and, having done so, subject the data to appropriate statistical tests. If you are not familiar with these, look at our articles on using statistical tests.
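As an illustration of the tabulate-then-test step, the sketch below uses invented questionnaire scores and Python's pandas and scipy libraries; any statistics package would serve equally well:

    import pandas as pd
    from scipy import stats

    # Invented attitude scores from a treatment group and a control group.
    responses = pd.DataFrame({
        "group": ["treatment"] * 5 + ["control"] * 5,
        "attitude_score": [4.2, 3.8, 4.5, 4.0, 4.3, 3.1, 3.4, 2.9, 3.6, 3.2],
    })

    # Tabulate: mean score per group.
    print(responses.groupby("group")["attitude_score"].mean())

    # Test whether the difference between the groups is statistically significant.
    treat = responses.loc[responses["group"] == "treatment", "attitude_score"]
    ctrl = responses.loc[responses["group"] == "control", "attitude_score"]
    print(stats.ttest_ind(treat, ctrl))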

Minimising bias

Bias can be a threat to the validity of experiments:

  • The experimenter can introduce errors in the recording of the data or, simply by virtue of having certain expectations of the outcome, design the experiment with this outcome in mind. This is a case of self-fulfilling prophecy rather than fraud.
  • The subjects can alter their behaviour to accommodate the perceived expectations of the experimenter, or the group can fail to reflect accurately the population at large.

Such possible biases need to be taken into account when you design the experiment.

Ethical considerations

There are many ethical issues to consider with experiments, and you would do well to check whether your university has any relevant policies. For example, could the subjects come to any harm by participating in the experiment, or would they feel disadvantaged if they did not participate (if they had training withheld, for example)? Are there issues of confidentiality?

It is a good idea to obtain participants' informed consent before they take part, and to explain the purpose of the experiment to them.

When is an experiment a true experiment?

When the following criteria are observed:

  • experimental treatment – possible variables are isolated
  • presence of a control group 
  • random assignment
  • measurement before and after treatment.

For the design of an experiment to be rigorous, you need to set up a contrived setting, which is difficult in the real world. Particularly when you are dealing with large groups or complex systems, experiments can be difficult because of the large number of variables; getting control of your sample can also be a problem.

"The bottom line here is that experimental design is intrusive and difficult to carry out in most real world contexts. And, because an experiment is often an intrusion, you are to some extent setting up an artificial situation so that you can assess your causal relationship with high internal validity. If so, then you are limiting the degree to which you can generalise your results to real contexts where you haven't set up an experiment. That is, you have reduced your external validity in order to achieve greater internal validity."

Trochim, W. M. (2006), The Research Methods Knowledge Base, available at http://www.socialresearchmethods.net/kb/ [accessed 23rd April 2007]

References

Jankowicz, A.D. (2005), Business Research Projects, Fourth Edition, Thomson, London

Meneses, G.D. and Palacio, A.B. (2006), "Different kinds of consumer response to the reward recycling technique: similarities at the desired routine level", Asia Pacific Journal of Marketing and Logistics, Vol. 18 No. 1.

Types of experiment

Laboratory experiments

A laboratory experiment is one that takes place in a situation isolated from what is going on around it, as in a laboratory for scientific experiments. The whole purpose of a laboratory is to create conditions where possible causal factors can be dealt with in isolation.

In management research, it is relatively unusual to set up an experiment in a laboratory: the term is used figuratively to refer to a setting outside the distractions of normal working life, probably a room chosen and set aside for that purpose.

It needs to conform to the conditions described in "How to conduct experiments" above. The location will generally be set up specifically for the experiment, and subjects are expected to behave according to a prescribed pattern, for example looking at a piece of courseware, sampling a product etc.

Examples of laboratory experiments include:

  • Testing reactions to a food product – for example, a few years ago people were asked whether they could tell the difference between butter and margarine.
  • Testing educational software – participants sit in a computer lab and their use of the software is observed.
  • The reality TV show Big Brother, where participants are isolated in a specially built house.

A laboratory experiment creates a highly contrived situation and some consider it inappropriate for investigating complex phenomena which are dependent on social interaction or organisational dynamics, such as how people relate to change. On the other hand, some have used its very contrived nature to create simulations and scenarios and invite response.

In "Self-protection vs opportunity seeking in business buying behavior: an experimental study" (Journal of Business & Industrial Marketing, Vol. 21 No. 2), Frank Jacob and Michael Ehret describe how they use a laboratory design to create a simulated environment where industrial buying behaviour can be investigated. (In a field setting, it would presumably not have been possible to create the conditions or control the variables for such a complex subject.) Participants are classified into subgroups (or levels of variables) according to the hypothetical performance of their division (under or over achievement). The measurement tool was a questionnaire.

In "The effect of strategic and tactical cause-related marketing on consumers' brand loyalty" (Journal of Consumer Marketing, Vol. 23 No. 1), Douwe van den Brink et al. describe an experiment conducted with 240 participants on the effect of cause-related marketing. Although the setting was actually a library, the scenario used was a simulated one ("Story boards about a non-existing company, brand and CRM campaign were used as stimulus materials") and the location chosen for its quietness allowing participants to concentrate. The measure was a questionnaire using scales, and the data was analysed with a t-test and ANOVA. The design is described as a "two-by-two between subjects design".

In "An empirical analysis of the brand personality effect" (Journal of Product & Brand Management, Vol. 14 No. 7), Traci H. Freling and Lukas P. Forbes conduct a highly structured experiment that looks at the role of personality in brand strategy and development. The research took place in a classroom with subjects being randomly assigned to different groups each of which was given a separate vignette with product information and comments suggestive of a particular personality. All subjects were handed a booklet which gave an introduction to the project, instructions, stimulus material and the measures, and were required to write down their thoughts and fill in the questionnaire.

Field experiments

The difference between a field experiment and a laboratory experiment is that the former takes place in a natural setting as opposed to a contrived one – for example a classroom, an office, a shop, shopping mall, factory etc. The setting is realistic, which has the advantage that you are not imposing artificial conditions but the disadvantage that you will have less control.

Pragmatic considerations may make field experiments more common in the social and management sciences.

In "Differential effects of price-beating versus price-matching guarantee on retailers' price image" (Journal of Product & Brand Management, Vol. 14 No. 6), Pierre Desmet and Emmanuelle Le Nagard describe an experiment on the effects of low price guarantees which took place in a shopping mall using face to face interviews along with stimulus materials in the form of an advertisement.

Some experimental designs

The most common form of experimental design is the pre-test post-test randomised design, which, as the name suggests, randomly assigns to groups, has a control group, and measures both before and after the experimental programme.

There are a number of different variants on this design: some of the most common are listed below.

Two-group experimental design

This is a post-test only randomised experiment, in which the effect of a particular programme on two groups is examined. Participants are randomly assigned to the groups, and the main interest is in the difference between the groups after the programme, hence the term post-test. The difference is measured using a t-test or one-way analysis of variance (ANOVA). This is one of the best designs for measuring cause and effect and, requiring only one round of measurement, it is relatively cheap to administer.
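As an illustrative sketch (with invented post-test scores, analysed in Python with scipy), note that for two groups a t-test and a one-way ANOVA lead to the same conclusion:

    from scipy import stats

    # Invented post-test scores for the programme group and the control group.
    programme_group = [78, 82, 75, 90, 85, 79]
    control_group   = [70, 74, 68, 80, 77, 72]

    # With only two groups, the t-test and the one-way ANOVA are equivalent
    # (the F statistic is the square of the t statistic).
    print(stats.ttest_ind(programme_group, control_group))
    print(stats.f_oneway(programme_group, control_group))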

Factorial designs

This is a useful design when we want to examine the effect of variations within factors. For example, we might want to examine the effect of temperature and time of day on worker productivity: these would be the factors, and the different times and temperatures would be the levels. We can use this design to explore the interaction between factors – for example, whether a hotter temperature has a worse effect at particular times of day. The design is described by the number of levels in each factor, written as n x m – for example, if we had three different times of day and four temperature settings, we would call the design a 3 x 4 factorial design. We would analyse the design using a regression model.
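The sketch below shows what the analysis of a 3 x 4 factorial design might look like, using simulated productivity figures and a regression model fitted with Python's statsmodels library; the interaction term tests whether the effect of temperature differs by time of day:

    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    times = ["morning", "afternoon", "evening"]      # 3 levels
    temperatures = [18, 21, 24, 27]                  # 4 levels

    # Simulate three observations per cell; hotter offices are made
    # slightly less productive purely for illustration.
    rows = []
    for time, temp, _ in itertools.product(times, temperatures, range(3)):
        productivity = 100 - 0.8 * (temp - 18) + rng.normal(0, 2)
        rows.append({"time": time, "temperature": temp, "productivity": productivity})
    data = pd.DataFrame(rows)

    # The C(time) * C(temperature) term fits main effects plus their interaction.
    model = smf.ols("productivity ~ C(time) * C(temperature)", data=data).fit()
    print(anova_lm(model, typ=2))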

Randomised block designs

Similar to stratified random sampling, this involves dividing your sample into homogeneous groups, and then repeating the experiment within each group. For example, if you were conducting an experiment in an organisation you might want to divide people up according to department or function. The reason for so doing is to reduce overall variation. Again, you would analyse using a regression model.
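A brief sketch of the corresponding analysis, with invented data in which employees are blocked by department and the training treatment is assigned within each block; including the block in the regression model removes between-department variation from the estimate of the training effect:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    data = pd.DataFrame({
        "department": ["sales"] * 4 + ["finance"] * 4 + ["hr"] * 4,   # the blocks
        "trained":    ["yes", "no", "yes", "no"] * 3,                 # the treatment
        "performance": [78, 70, 81, 73, 69, 64, 72, 66, 74, 70, 77, 71],
    })

    # Department is included alongside the treatment, so the training effect
    # is judged against within-department variation only.
    model = smf.ols("performance ~ C(department) + C(trained)", data=data).fit()
    print(anova_lm(model, typ=2))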

Covariance designs

This term is used when the design is of the basic pre-test post-test randomised variety, but variables are adjusted to remove extraneous effects.

Hybrid experimental designs

These are designs which combine features of the more established designs described above. For example:

  • The Solomon four-group design is a way of triangulating the effects of testing. Two groups receive the treatment and two do not; only one group in each pair takes a pre-test.
  • The switching-replication design is a way of overcoming the ethical objection to giving a treatment to one group but not another: the groups are switched around so that the first control group becomes the treatment group in the next phase of the experiment, and vice versa.

Quasi experiments

These lack the rigorous conditions of "true" experiments, i.e. manipulation of variables, random assignment etc. They occur when the researcher takes advantage of naturally occurring events to implement some aspect of experimental design, for example a before and after measurement. The researcher's role is reduced to that of an observer; he cannot manipulate or control the conditions of the experiment. He also faces difficulties of unobtrusive observation, defining an appropriate measure for the dependent variable, and lack of control over variables.

Examples of natural events could be a strike, a threat of redundancy, a new policy which is implemented in some departments and not in others, or a training course which only some managers go on. Such events create the possibility of a before and after measurement, or a control group – both aspects of experimental design. Not all criteria – isolation of variables, control group, random assignment, before and after measurement – are present, hence the term "quasi experiment".

The big advantage of such experiments, however, is that they exploit naturally occurring events, and they can thus offer useful triangulation with other research methods.

In "Different kinds of consumer response to the reward recycling technique: similarities at the desired routine level" (Asia Pacific Journal of Marketing and Logistics, Vol. 18 No. 1), Gonzalo Díaz Meneses and Asunción Beerli Palacio use a contrived situation but is based on a convenience, and hence not randomised, sample. Because of the lack of randomisation, the experiment is not a true one, but the sampling method is deliberately chosen as the authors claim that:

"[Convenience sampling] is recommendable when the collaboration of those surveyed requires, as in the case of this longitudinal research, intensive questionnaire completion. Furthermore, if those surveyed belong to the same social network as the surveyor, there is greater opportunity for observation and control of the individuals in the experiment."

Volunteers apply the treatment to a member of their household, who has to fill in three questionnaires over a period of time, looking at whether rewards, or beliefs, affect recycling behaviour.

Some quasi-experimental designs

Non-equivalent group design

This is very similar to the pre-test post-test randomised design, but lacks randomisation. Two groups are selected for their similarity, but they are not as similar as if assignment had been purely random, hence the name.

Selection regression

The distinguishing feature of this type of design is the way it assigns to groups – people are measured prior to the programme and are assigned based on their score. The basic design is a pre-test post-test two-group design, with measures before and after the programme. The advantage is that assignment is based on need – for example, the sickest patients for a drug, the lowest scoring children for a remedial programme.