
How to evaluate teaching, part 3

How to evaluate

Effective evaluation depends on keeping the principles listed in the previous section in mind, knowing what you are evaluating, and designing the instrument accordingly. A fit-for-purpose instrument will be valid; one that measures consistently among students and over time will be reliable (Center for Excellence in Teaching and Learning, 2010).

No single instrument is adequate for evaluation, and you may need to triangulate by combining instruments – for example, questionnaires with interviews or focus groups.

What to evaluate

If you are evaluating a particular course, you need to design an instrument that reflects the totality of that course. This means not just the class element, but also course design, organization, assessment, etc. You can then group the questions according to factors.

For example, you will want to include questions about:

  • Delivery of taught material: does the lecturer present things in a clear, organized way? Does he or she make the material interesting?
  • Assignments: are these appropriately paced, or too bunched? How helpful was feedback?
  • How helpful were resource materials, course websites, etc.?
  • The overall organization of the course, for example pacing.
  • Access to facilities, such as computing, and library.
  • Interactivity and student-centred approaches, for example group work (learning from students often gets overlooked in evaluation, focusing as it does on the lecturer).
  • The effect of the course on students' learning.

You may not want to evaluate the whole course, but particular aspects of it, for example the use of Twitter. Or, you may want to measure the attitudes of the students themselves: for example, how does their attitude towards learning change over time? In that case, you will want several evaluations, each at a different stage.

Try to focus on what is taught rather than on the teacher: you should be more interested in what the students learned than in the teacher's personality and its effect on them. Look particularly at your department's and institution's criteria for effective teaching, and use these as guidance on what to evaluate.

Consider who to ask. Students may have valid views on some issues, such as whether or not the lecturer makes effective use of class time, but they cannot be expected to know whether or not the lecturer is up to date with the latest research (Gross Davis, 1993).

There are some issues that your colleagues are in a better position to assess, such as course aims, content, and material, possible assessment methods, and new instructional methods. You may also want to ask yourself how something went, perhaps writing a reflective statement at the end of a course or keeping a diary in which you record your perceptions of a particular class interaction.

Methods of evaluation: The questionnaire

The questionnaire is not the only instrument which can be used for evaluation, but it is the most common. Here are some points to bear in mind when designing it.

  • Use both open and closed questions, so that you reveal both quantitative and narrative data. The latter provides opportunity for the student to reflect and elaborate on their experiences.
  • For quantitative questions, avoid relying on "yes/no" answers, which are very easy to fill in mindlessly. Instead, use a 5- or 7-point Likert rating scale, with 1 being the lowest rating and 5 or 7 the highest. These are useful for calculating an average response from the class.
  • You can also test the same dimension twice by using two different questions which ask similar things in different ways (Johnstone, 2005). Note how this is done in the example shown below in "Figure 1. Example of a questionnaire", taken from the University of Glasgow Centre for Science Education (MacGuire, 1991, quoted in Johnstone, 2005).
  • Group questions according to theme, and have a number of different questions in each.
  • Ask general questions about the teacher, but avoid the subjective. For example, not "Do you respect this teacher?" but something along the lines of "How highly do you rate the teacher's overall effectiveness?".
  • Make sure the questions are clear, and avoid anything ambiguous. For example, "The instructor is well prepared and marks work fairly" confounds two issues (Gross Davis, 1993).
  • Make the form as short as possible. Questionnaire fatigue can easily set in.
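The paired-question check mentioned above can be applied once responses are in: reverse-score the negatively worded twin so both items point the same way, then see how strongly the pair agrees. The sketch below is a minimal illustration only; the 7-point scale, the function names, and the sample responses are all assumed, not taken from the sources cited.

```python
def reverse_score(rating, scale_max=7):
    """Map a rating on a reversed (negatively worded) item back onto the
    same direction as its twin: 1 <-> scale_max, 2 <-> scale_max - 1, ..."""
    return scale_max + 1 - rating

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical responses to two questions probing the same dimension,
# the second worded negatively and therefore reverse-scored.
q_positive = [6, 5, 7, 4, 6, 5]
q_negative = [2, 3, 1, 4, 2, 2]
aligned = [reverse_score(r) for r in q_negative]
print(pearson(q_positive, aligned))  # close to 1 means the pair is consistent
```

A correlation well below 1 would suggest students are reading the two wordings differently, in which case the pair is not measuring one dimension and should be redrafted.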

Figure 1. Example of a questionnaire (from the University of Glasgow Centre for Science Education)

Not all questionnaires are based on the one-time experience of a course: some are instruments measuring attitude and are designed to be repeated several times.

For example, Perry's Model, which is popular in some parts of science education, looks at the way students mature from wanting to be spoon-fed by the lecturer to greater independence and questioning. It has been used to measure change from a cramming approach to one based on problem-based learning.

Longitudinal measurement is necessary to give a series of snapshots at different points in time. Students are presented with a series of statements to choose from and have to select the one they agree with (Johnstone, 2005). The questionnaire is supplemented by interviews as a way of providing richer data.

When it comes to administering the questionnaire, bear the following points in mind (Center for Excellence in Teaching and Learning, 2010; Gross Davis, 1993):

  • Set aside a time for students to fill in the questionnaire – perhaps 15 minutes at the end of the final session, or perhaps the week before the final session.
  • Ensure that students understand the purpose of the exercise – that it is part of your attempts at continuous course improvement.
  • Assure them of anonymity.
  • Ask someone other than the faculty member teaching the course to collect the questionnaires and take them to the faculty office.
  • Do not look at the forms until you have finished grading the course.

When carrying out the analysis of the questionnaires, the following are useful guidelines (Center for Excellence in Teaching and Learning, 2010; Gross Davis, 1993):

  • Establish a unit of analysis, such as class average for a response to a particular question.
  • Keep courses separate: aggregating data across courses will obscure trends.
  • Check the number of students who completed the forms against the class enrolment. Be cautious about a low completion rate, and do not attempt to summarize data if there are fewer than ten forms.
  • Prepare summary statistics for the quantifiable questions: frequency distribution, average response, standard deviation, departmental or other norm for comparison.
  • Summarize narrative comments for each question. Group the summary under headings, noting the number of comments under each heading. Bear in mind course aims and objectives and departmental goals.
  • For quantifiable questions, note your highest and lowest rated items. Do they reveal strengths and weaknesses which cluster in patterns, say on organization of material?
  • From the narratives, identify particular problems. Are complaints justified?
  • Look at factors that could influence the course – for example is it a large or small class? Is it one you are used to teaching?
  • Try to obtain the help of an experienced colleague, or perhaps someone in a teaching support unit, to go through the questionnaires with you.
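The quantitative steps in the guidelines above – the completion-rate check, frequency distribution, average, and standard deviation – can be sketched in a short script. This is a minimal illustration, not part of the cited guidelines: the function name and sample ratings are assumed, though the "do not summarize fewer than ten forms" threshold follows the advice above.

```python
from collections import Counter
from statistics import mean, stdev

def summarize_likert(responses, scale_max=5, enrolment=None, min_forms=10):
    """Summary statistics for one Likert-scale question.

    responses: list of integer ratings (1..scale_max), one per completed form.
    enrolment: class enrolment, used to report the completion rate.
    """
    n = len(responses)
    summary = {"n": n}
    if enrolment:
        summary["completion_rate"] = n / enrolment
    # Guideline: be cautious about low completion, and do not
    # attempt to summarize fewer than ten forms.
    if n < min_forms:
        summary["note"] = "fewer than %d forms - do not summarize" % min_forms
        return summary
    counts = Counter(responses)
    summary["frequency"] = {p: counts.get(p, 0) for p in range(1, scale_max + 1)}
    summary["average"] = mean(responses)
    summary["std_dev"] = stdev(responses)
    return summary

# Hypothetical ratings for one question from a class of 20.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 4, 5]
print(summarize_likert(ratings, scale_max=5, enrolment=20))
```

Running this per question, per course, keeps courses separate as advised; the averages can then be compared against a departmental norm where one exists.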

Other methods of evaluation

Other ways of obtaining feedback on teaching include more "conversational", qualitative methods. For example, interviews with a small sample of the questionnaire population, or structured focus groups, are often used to supplement questionnaires, whose data can be rather bland.

Narrative methods, too, are becoming popular. For example, a PhD student at London University's Institute of Education used a biographical narrative interpretive method to obtain rich accounts of student experience on an online course. She found it particularly effective for probing students who were reluctant to engage.

One student, for example, described how he did his coursework after finishing his bar shift at 1am, as it was the only time he could gain access to a computer. Another, a refugee, used her first pay cheque to buy a computer and then had to negotiate access with her family. This is not the sort of data that you can easily obtain from a questionnaire.

There is only so much information that can be gleaned from students: one's peers also provide a useful source. The standard form of peer evaluation is the observed lesson, and observation has been built into quality assurance practices, for example in the UK. However, as with other evaluation methods it is most valuable when it is developmental, and is particularly useful for mentoring someone new to teaching.

The observer should have a checklist for what to look for, which may for a lecture include the clarity of the session, the aims and objectives, the delivery, the engagement of students in the learning, and opportunities for interaction. For a seminar or small group activity, the list should include facilitation skills, interaction, encouraging all students to participate, feedback, and helping students with their learning goals (Fullerton, 2003).

Teachers can evaluate themselves through reflecting on the quality of their teaching, either as a whole or as a result of a particular class or interview with a student.

Some institutions or professional bodies may require teachers to submit a teaching portfolio. This is a collection of documents which provides evidence of the work done and skills developed in teaching. The following are some examples of what it could include:

  • Student ratings and peer ratings or observations.
  • Examples of courses developed or re-designed.
  • Instructional materials, course textbooks, etc.
  • Examples of innovative teaching.
  • Pedagogical research.

More information on teaching portfolios can be found in Fry and Ketteridge (2003).


Evaluation can contribute to the development of excellent teaching and a better student experience.

The key to good evaluation is to treat it as you would a research project: focus, know what you want, choose a methodology in keeping with your purpose, draw out legitimate findings, and consider their implications.

Traditional evaluation sheets issued after a course tell us how satisfied students were, but may not reveal much about what they learned. Good evaluation should do just that. It should also reveal not just who should be promoted or offered tenure, but what the institution as a whole needs to do to support teaching.