Managing a student research project
Projects, dissertations and theses
Whether you are a full- or part-time student in business education, you are likely to be required to complete projects of one kind or another. In some cases the project forms a relatively minor part of a course; in others, the project is virtually the whole basis on which an award is made. With a research project, at whatever level, the agenda is set by the student to a greater extent than in a taught course. Similarly, the student bears responsibility for the quality of learning that takes place in the project and for the eventual written outcome. In other words, how well you do is largely up to you!
A good working definition of research is:
"seeking through methodical processes to add to your own knowledge and, hopefully, to that of others, by the discovery of nontrivial facts and insights"
It's worth looking at this definition in a bit more depth. Research is about methodical processes – not random ones. There is an assumption of a plan to find things out and capture and report on what has been found out. This planning and organization, being methodical, is generally called the research methodology.
Research in a university setting is about adding to your own knowledge, and demonstrating to others – assessors, examiners and markers – that this has been done. For doctoral research, and for some master's degrees, it is expected that genuine new knowledge will be created by the student in the area studied. With truly original research, it would be self-evident that a student's own knowledge would have been added to. But in all projects undertaken for educational purposes where no genuine new insights have been uncovered, it is necessary for students to demonstrate that they have learned something themselves. This may differ from a piece of business research, where an organization may be interested in an insight to be applied in its business, without the presumption that the researcher him/herself has learned anything.
Research deals in non-trivial facts and insights. It is the framing of the project and the separation of the trivial from the non-trivial, and the relevant from the irrelevant, which marks well-crafted research projects at any level. Research deals with both facts, which presume that something has been observed and noted or described, and insights, which presume that some useful and relevant explanation has been drawn from the study.
A research project has many different purposes. Four common ones are:
- to review existing knowledge;
- to describe some situation or problem;
- to construct something useful;
- to explain some phenomenon.
The review of existing knowledge is a very common type of student research project, particularly in diploma, undergraduate and taught master's courses. It can provide excellent research training with the added advantage that it requires little by way of resources save access to the relevant literature.
Although descriptive research may appear to be less demanding than other types this is often not so. However, due to the lack of knowledge of a subject or research methods, or both, it is quite possible that the purpose of a student's first study will be to describe something, particularly if there has been little previous research in the field.
The construction of something which is useful is an outcome of research which increasingly is being favoured by organizational sponsors and research funding bodies in business and management. In the physical sciences and engineering, students may be recruited to pursue a particular line of research such as the construction of a new type of robotic operation or artificial intelligence system.
Explanation is the ideal of all professional research workers. It is only when causal rather than statistical relationships are identified that generalizations may be made or laws formulated.
All institutions that offer degrees publish regulations for the guidance of candidates. Before beginning a research project, it is clearly important to read carefully and annotate the guidelines, and we would strongly recommend that the guidelines are read carefully again before beginning to write up the project. Not all guidelines express clearly and precisely what is expected, and in any case, students should always make the institution's expectations of standards an early (and documented) topic of discussion with a supervisor or department head.
For example, the criterion for the degree of Master of Philosophy by research at one university was that an MPhil thesis should display:
"a good general knowledge of the field of study; a comprehensively particular knowledge of some part or aspect of the field of study; some original contribution to knowledge or understanding".
and a PhD thesis should display:
"an original contribution to knowledge or understanding".
Questions and clarifications which might arise from these stated requirements might be, for example:
- How comprehensive is a "comprehensively particular knowledge" expected to be?
- Is the PhD also expected to demonstrate a "good general knowledge in the field of study"?
- How should the difference between "some original contribution" and "an original contribution" be judged? Is this the only significant difference of depth between the two?
Guidelines will always embody both "compulsory" elements which require students to conform to some standard, and "free" elements where students can display their own approach. Thus, under compulsory elements the need to observe certain typographical standards – for example, that text be typed double-spaced with at least a 40 mm margin on the left and a 25 mm margin on the right – should be noted. Similarly, the work of others must be properly referenced, with some institutions prescribing the form that citation of particular types of work such as journal articles should take. Non-negotiable guidelines on matters such as binding, appendices, margins, figures, and pictures are almost always given. Students should simply note and conform to them. Not to do so will usually automatically lose marks.
A useful way of coming to grips with requirements is for a student to scrutinize theses or dissertations in their library. However, the research reports produced by students who were successful in the past can for a variety of reasons be an imperfect guide to present standards. Modestly written reports may have been redeemed by a brilliant defence at an oral examination. The standards of the field may have changed as more research has been completed.
First degree and diploma projects
This category includes studies which form part of courses at the first level of higher education, many of which are referred to as "degree equivalent". Analytical rigour, particularly in terms of extensive quantitative testing, is not usually demanded. However, independent enquiry – gathering data outside the classroom and making sense of it through the exercise of judgement – is normally expected. Most courses will ask for a reasonable standard of presentation of the results. In some cases, students will be expected to display competence in specific areas – for example, the ability to collect data.
It is customary for the projects to comprise part of the student's assessment. Rarely will projects represent less than 10 per cent of a particular year's assessment. In some instances (for example, the sandwich course with a whole year in industry) the project may be the only academic assessment made during a part of the course.
With the requirement for independent enquiry comes the need for planning how the enquiry should be pursued. What is often not appreciated by tutor or student is that there is much more to research methodology than may initially be supposed. Time devoted to studying the research process is therefore a worthwhile investment.
Although research is a vital element of higher education, it should not be assumed that all tutors have had significant research experience. Academic staff may have been overseeing first degree or diploma projects for many years, but this does not guarantee competence in research methods.
If a student can demonstrate competence in, and learning about, the process of research, and not just the subject areas studied, this is likely to lead to a more successful outcome in terms of grading.
Taught master's degree dissertations
A feature of recent decades has been the growth in master's degrees obtained by "study and dissertation", such as the Master of Business Administration (MBA). Thus, typically, a one-year full-time course may comprise nine months of taught courses, with three months being available for a project to be written up as a dissertation. A period of the order of three months is insufficient to enable much more than a descriptive account of a line of enquiry. The absence of validation (very rigorous analysis), generalization (wider application of findings outside the specific case studied) and originality (genuine new knowledge) distinguishes dissertations of this type from the theses of pure research degrees.
Where the elapsed time made available for a postgraduate dissertation is rigidly controlled, students need to plan their project very carefully and in particular should avoid being over-ambitious. There are, however, some courses which permit a student to take much longer over their dissertation (possibly to include an additional two or more years of part-time study). Although the dissertation may then be more substantial due to a greater opportunity to collect data, the probability of completion can be reduced by the competing demands of employment and remoteness from an academic environment.
Master's degrees by research theses
The requirements for the successful completion of projects in a master's degree by research will vary from institution to institution. Always check, and make sure you understand, the regulations of your institution. In particular, the degree of originality needed and the extent to which generalization of the results is possible may be unclear. Typically, the contribution to knowledge of a master's thesis should be of some significance, particularly in view of the fact that it is likely to serve as a reference work.
Externally, assessors will give attention to the thoroughness of the research as indicated by the bibliography, in addition to the analysis, conclusions, and the standard of presentation.
Doctoral degree theses
This is the highest level of student research activity and, although students may proceed to careers in research itself, the doctoral project will probably be the last occasion when they are formally assessed on the grounds of both research competence and originality. The major aim is to present a thesis for external assessment which will prove to be satisfactory in both respects. A subsequent aim may be, through publication, to become recognized as an expert in the field of study chosen. The requirements are, inevitably, more demanding than those of the master's degree by research. For example, the University of Manchester's Ordinances state that:
"the degree of Doctor of Philosophy (PhD) is awarded by the University in recognition of the successful completion of a course of supervised research, the results of which show evidence of originality and independent critical judgement and constitutes an addition to knowledge".
The achievement of a doctorate in any subject will represent a major investment in terms of time and effort. Usually, the process requires at least three years and there is no guarantee of a successful outcome.
Projects and the part-time student
Part-time students devote part of their time to academic study and the remainder usually to employment. Predominantly, they are managers or professionals who wish to add to their career potential by supplementing a qualification at degree or equivalent level with a higher degree.
Although there is a significant trend towards the inclusion of formally assessed projects within first degree programmes, which has exposed students to aspects of the research process, an equally important trend has been the growth of master's projects undertaken by part-time students in employment – again, notably, the MBA and its derivatives. The dual status of student and employee provides opportunities not previously available, as the researcher might well be in a position to move beyond recommendations to assuming responsibility for change within the organization. If this is the case, action research, rather than applied research, is in prospect.
Whilst positive advantages can be created by part-time study, difficulties can arise, often created by the absence of a high measure of contact with faculty and fellow students. The motivation needed to sustain research effort over, possibly, a five-year period, has to be of a high order. Access to facilities and supervisors is much more restricted than is the case with full-time students and the value of self-management of the research project is therefore greater.
Timing and quality
At any level, the two key factors which must be borne in mind are timing and quality. In some cases, the time constraint is inflexible. Failure to hit the deadline means you don't get the qualification. This means that, to avoid submitting something on time but sub-standard, you are going to have to create a carefully drawn-up plan – and stick to it.
Please note – always read and make sure you understand your university's regulations regarding late submission.
Timeliness and quality may sometimes be seen to be competing elements for students. Be warned – that's not how your examiners will see it! Both are expected.
This article is adapted by the authors from: The Management of a Student Research Project, J.A. Sharp, J. Peters and K. Howard, Gower Press, 2002.
Topic selection for research degrees
Suggestions for research topics may arise from the following sources.
1. Theses and dissertations.
2. Articles in academic and professional journals.
3. Conference proceedings (including online discussion groups) and reports generally.
4. Books and book reviews.
5. Reviews of the field of study.
6. Communication with experts in the field.
7. Conversations with potential users of the research findings.
8. Discussions with colleagues.
9. The media.
The type of study being followed will affect the extent to which students use the sources listed above. PhD students must expect to cover most if not all of them as it is vital to establish that their work is original. At other levels convenience and access are likely to dictate the action taken.
Sources 1 to 5 imply access to relevant literature. For most students this will normally be obtained through a high-quality library – the quality being measured in terms of the library's access to literature and the ability of the library staff to advise on use of the library resources, including online databases.
All theses, many dissertations and other student reports will generally contain suggestions for additional research. Journal articles sometimes include recommendations for further work and, as they are reasonably up to date (appearing a year or so after the completion of a study), should be given careful attention by the researcher. Reports, particularly those of government-sponsored bodies, although often the outcome of protracted enquiries, are usually published with some speed. Again, these often contain recommendations on which research can be based.
Books give a detailed account of particular fields and consequently will figure prominently in a researcher's studies. Books do, nevertheless, possess the disadvantage that they are not as up to date as the other written sources mentioned, and their contents may have become known to other researchers. It will be noted that the list above includes book reviews as well as books. The reviewers of a book are usually able to evaluate the extent of its contribution to knowledge and can provide a useful service for students seeking ideas for topics.

Reviews of the field of study provide a very useful guide to the researcher as to what is known in the field and what remains a matter of conjecture; to what has been discovered by previous researchers and where further research is needed. As research subjects proliferate, there is a definite trend for academic journals to publish review articles by leading experts in the field.
Sources 6 to 8 require rather more initiative than does a mere search of the literature. The notion that research can be pursued from behind a desk may appeal to some students but, whatever the field, at research degree level much advantage may be gained from discussions with others. Active researchers are usually sympathetic towards students who are undertaking studies in an area of mutual interest. Ideas for research can sometimes be tested during a brief conversation on the telephone or at conferences and seminars, but ideally an appointment should be sought where potential topics can be discussed more fully. These comments apply with particular force to doctoral level students who are able to identify individuals from other institutions or organizations who are obviously leaders in their field. In these circumstances a journey of some distance may well prove to be a highly useful investment.
The growing practice of online conferencing can assist students who wish to expose their thoughts on prospective topics to those researching round the world in specified fields. Although there will be some circularity in the sense that the nature of the research will not be defined until the topic has been selected, the field in which the student is working will probably favour certain categories of research which might be linked to potential users. Thus, a research student in biochemistry might make contact with the research departments of companies manufacturing pharmaceuticals, or an engineering student could initiate discussions with a company making hydraulic valves.
Much may be gained, when a few ideas have been generated, by discussing them informally with colleagues (work colleagues, fellow students and staff). In this respect the advantage of working within a research group is obvious and the need for greater initiative on the part of the lone research student is highlighted.
The media should not be ignored as a potential source of topics. Researchers are disinclined to publish their findings until they have been sufficiently substantiated but newspapers, popular journals, and radio and television may report on research progress which is felt to be of general interest. Additionally, findings may be reported by the media perhaps 12 months before scholarly accounts appear in learned journals. Students in the social sciences in particular may through awareness of the media be able to identify issues within which research topics might be located.
Under-focus and over-focus
Students often need specific methods that will guide them in the topic selection process. These methods should be capable of providing useful guidance for two very different types of student: the under-focused, whose ideas of a subject area are not specific enough to form the basis of a viable topic, and the over-focused, who have a single-minded aim of pursuing a particular topic.
Under-focused students need ways of refining rather vague and often somewhat grandiose notions of a research area. The methods that are useful to them are those enabling them to identify a researchable "niche" which they are capable of exploiting with the time and resources at their disposal.
The over-focused student might seem, by contrast, to have the ideal attitude for successful research. It must be remembered, nevertheless, that research is a specialized business and that it is by no means unknown for research students who have a clear idea of the research they wish to do to find, belatedly, that it has been done already, or is not feasible, a fact of which they are unaware simply because of unfamiliarity with the frontiers of the subject.
Though "relevance trees" originated in the field of research and development management, their attraction is that they are excellent models of one of the ways people think about problems. Essentially, a relevance tree suggests a way of developing related ideas from a starting concept. To be most effective, the starting concept should be fairly broad. The relevance tree then serves as a device, either for generating alternative topics or for fixing on some "niche".
The example shown starts from the broad area of "Demand for transport". The researcher first identifies two major factors affecting it, "Need to travel" and "Individuals' ability to afford travel", the latter of which can in turn be related to "Income" and "Cost of travel".
The first factor splits again into "Leisure journeys" (also affected by the ability to afford travel) and into "Work journeys". Determinants of the latter factor are seen to be "Location of work" (where it is carried out), "Location of people" (where they live) and "Work activities". This last set of variables might suggest to the researcher a possible topic, namely the extent to which changes in the forms of work activity, for example those brought about by the introduction of intranets and extranets into large organizations will affect in the longer term where people live and how far and how frequently they travel to work.
An example of a relevance tree diagram is given below:
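The tree just described can also be sketched in code. The following Python fragment is an illustration of ours, not part of the original text: the node names come from the "Demand for transport" example above, while the nested-dictionary representation and the traversal function are our own additions. Each root-to-leaf path it prints is one candidate "niche":

```python
# A relevance tree represented as a nested dictionary.
# An empty dict marks a leaf, i.e. a candidate research niche.
relevance_tree = {
    "Demand for transport": {
        "Need to travel": {
            "Leisure journeys": {},
            "Work journeys": {
                "Location of work": {},
                "Location of people": {},
                "Work activities": {},
            },
        },
        "Individuals' ability to afford travel": {
            "Income": {},
            "Cost of travel": {},
        },
    }
}

def niches(tree, path=()):
    """Yield every root-to-leaf path through the tree."""
    for concept, subtree in tree.items():
        new_path = path + (concept,)
        if subtree:
            yield from niches(subtree, new_path)
        else:
            yield new_path

for niche in niches(relevance_tree):
    print(" -> ".join(niche))
```

The deepest branches, such as the path ending in "Work activities", are the narrowest framings and therefore the most promising starting points for a researchable topic.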
Feasibility of proposed research
There is little purpose in attempting a full evaluation of a topic unless the research to which it leads is feasible. The student should therefore consider the following factors:
- Access to and availability of data and information.
- Opportunity to pursue a particular research design.
- The time needed to complete the research.
- The technical skills needed.
- Financial support.
- The risk involved.
The six factors may be relevant to all levels of research and the first two can be insurmountable obstacles.
Access and availability
The first factor may be exemplified by a student who has selected as a research topic variation in manufacturing costs in different countries within multinational car firms. Some car companies may be prepared to state in which countries costs are particularly high but it is unlikely that sufficient companies, if any, would disclose detailed data. This may be an obvious example but students should satisfy themselves that there is reasonable prospect of access before proceeding further. A more difficult assessment for the student to make at this stage is whether data or information which will be essential to the research actually exist. Planned approaches involving the analysis of secondary data (which are not gathered by the researcher) will be impossible if the data have not been recorded or are unreliable.
Opportunity for research design
The student may be inclined towards a topic in which a laboratory features and in many cases, such researchers will be able to set up experiments in their own institution's laboratories. If, however, the student intends to conduct a field experiment, some degree of cooperation will normally be needed. For example, pricing policy for books may well be felt to be a suitable topic for research but it would be unlikely that a publisher could be found who would agree to handing over control of pricing to a student. Similar difficulties may be encountered in attempts to arrange a survey. Thus, a researcher may wish to study member/officer relationships within a particular local authority. Before proceeding further, permission must be sought to approach the subjects of the investigation. To attempt to proceed without approval is likely to lead to complete frustration of the plan.
The time needed to complete the research
In some instances, research is designed without proper thought being given to the time needed for its completion.
For example, the development of an anti-corrosive paint could be an appropriate outcome of a research study, but the time required to assess the anti-corrosive properties, added to the development period, may far exceed the time available to a student researcher. Very often the student's department will have recognized the problem and broken down the task into a set of consecutive projects, each of which can be undertaken by a different student. Where this is not the case, the student will need to consider whether a different project should be undertaken. Another question is whether too much is being attempted within the time available for research. When the topic has been selected, much is to be gained by drawing up a research plan which will indicate whether the deadlines can be met.
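Such a plan can be as simple as a dated list of stages. As a rough sketch, a few lines of Python are enough to show whether a planned sequence of stages fits the deadline; the stage names, durations and dates below are hypothetical, chosen only to illustrate the arithmetic:

```python
from datetime import date, timedelta

# Hypothetical stage durations (in weeks) for a three-month dissertation.
stages = [
    ("Topic refinement and literature review", 3),
    ("Research design and pilot", 2),
    ("Data collection", 4),
    ("Analysis", 2),
    ("Writing up", 3),
]

start = date(2024, 6, 3)       # assumed project start
deadline = date(2024, 8, 30)   # assumed submission deadline

# Stages are assumed to run consecutively, as in the paint example above.
finish = start + timedelta(weeks=sum(weeks for _, weeks in stages))
slack_days = (deadline - finish).days
print(f"Planned finish: {finish}, slack: {slack_days} days")
```

Here the plan finishes ten days after the deadline. A negative slack figure is exactly the warning the text describes: too much is being attempted in the time available, and either the scope or the start date must change.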
The technical skills needed
It is reasonable to assume that student research at any level is more likely to succeed if the topic chosen will utilize skills and knowledge already possessed. There would, for example, seem to be greater prospects of satisfactory completion if a student with a first degree in physics were to research in this subject rather than in botany. In some instances, however, it is impossible to avoid having to acquire new skills if a topic is to be researched effectively. This is particularly so in the social sciences, where skills often need to be developed in statistics, mathematics and computing. Students should, therefore, consider very carefully whether the topic chosen matches the skills they possess or will have time to develop during the course of their study. If doubt exists, the supervisor should be able to give guidance.
Financial support
Much research has foundered because of a lack of resources. In student research, where the prime resource is the individual concerned, it is essential that the financial support needed for the study should have been resolved before work is started. The student's budget will include funds to support normal research expenditure, which may involve travelling, purchase of books, cost of transcriptions and so on. Many sponsored students are unclear as to what expenditure will be covered by their sponsor, but this is unlikely to cover the purchase of expensive equipment or materials, or unlimited travel and subsistence, and may not even extend to postal questionnaires. Before a topic is finally selected, therefore, the question of cost must be examined. Inability to undertake certain activities due to a shortage of funds may prejudice a potentially successful study if the problem is not anticipated and resolved at the outset.
The risk involved
As a final point, the student should consider the risk that for any reason the project will prove impossible to complete. Some types of research are inherently more risky than others; the student should, therefore, at least decide whether the risk of the project proposed is acceptable.
These comments on the six factors have been made particularly with full-time research in mind. They apply with even greater force to part-time researchers with the one exception that in longitudinal studies involving an evaluation of some phenomenon over time the part-timer may have an advantage because of the longer duration of the study.
The approach to research
One way of thinking about research is by the major research approach used. Approaches usually encountered in student research are the laboratory experiment, the field experiment, the case study, and the survey.
The laboratory experiment is relevant to all the major research subject groupings (with the possible exception of the humanities) but is primarily used in physical science, life science and engineering research.
In the context of research methods a field experiment suggests that an investigation subjected to certain controls is conducted in non-laboratory conditions. For example, a new detergent may have been developed as a result of laboratory research and a field experiment may be set up to see how well it works in actual use.
The case study is often the basis for student projects, particularly in the social sciences. In this type of research students may spend a period in an organization, and the comments and conclusions which emerge will be based solely on their experiences in that setting.
There is some connection between the survey and the field experiment in that techniques relevant to the latter may be used in the former. However, whereas the field experiment implies controls and need not necessarily involve people, the survey is viewed separately here as a method of extracting attitudes and opinions from a sizeable sample of respondents.
The purpose of research
A research project has many different purposes. Four common ones are:
- to review existing knowledge;
- to describe some situation or problem;
- to construct something useful;
- to explain some phenomenon.
The review of existing research findings is a very common type of student research project, particularly in diploma, undergraduate and taught master's courses. It can provide excellent research training with the added advantage that it requires little by way of resources save access to the relevant literature.
Although descriptive research may appear to be less demanding than other types this is often far from the case. However, due to the lack of knowledge of a subject or research methods, or both, it is quite possible that the purpose of a student's first study will be to describe something, particularly if there has been little previous research in the field.
The construction of something which is useful is an outcome of research which increasingly is being favoured by sponsors. In business and management, the physical sciences and engineering, students may be recruited to pursue a particular line of research such as the construction of a new type of optical system, or data analysis relating to a particular market or customer group.
Explanation is the ideal of all professional research workers. It is only when causal rather than statistical relationships are identified that generalizations may be made or laws formulated.
The nature of research
Basic research is concerned with the development of theory, without an attempt being made to link this to practice. The findings are usually reported in learned journals.
Applied research might take the outputs from basic research and seek to draw general conclusions about the prospects for application. Academics will be interested in the findings, but so too will be relevant professionals working in industry, commerce or government who will wish to evaluate the potential for transfer of the findings to their own settings. This category also embraces research undertaken with a specific practical objective in mind.
"Action research", as its name implies, leads to change, and the researcher is a participant in the change process rather than an observer of it. Clearly, this will demand skill and experience which many students will not possess.
The process of research
Deeper understanding of research will come from consideration of the process by which it is conducted and, of course, from embarking on an actual study.
Despite the wide variety of field, purpose, and approach, some common features of the research process can be identified, and if a student departs significantly from a general systematic approach the research will be inefficient and quite possibly ineffectual.
Gathering data to an adequate standard
It is important that the researcher demonstrates that the data were properly collected. Ideally this means that others working at the same level would have been able to arrive at the same readings or observations. Where primary data are concerned this is usually not feasible. Instead, researchers must settle for following a procedure that will be adjudged adequate in the light of the level of their research; in particular by the examiners to whom the research report will be sent for assessment. This question is taken up later in the broader context of the assessment of the research report. The following checklist of points can be used to secure adequate data gathering standards for both secondary and primary data. These are that:
- The data actually measure what they purport to measure.
- Proper attention was paid to measurement error and the reduction of its effects.
- A suitable sample was used; in particular that:
  - it provided a basis for generalization; and
  - it was large enough for the effects of interest to be detected.
- Data were properly recorded; in particular that:
  - the conditions under which the data were gathered were properly noted; and
  - suitable data recording methods were used and efforts were made to detect and eliminate errors arising during recording.
Not all of these points apply in every situation, and the full list is perhaps only appropriate where data are to be gathered in some systematic way and are of the nominal, ordinal, interval or ratio type. The researcher's own notes, which we view as textual data and an important data source, would probably need to be judged only against the fourth point above, on proper recording. Nonetheless, the list will now be reviewed point by point, with the greatest emphasis being placed on data recording.
Ensuring the data measure what they purport to measure
Very often it is difficult to measure the actual variable of interest and instead surrogate measures may be adopted. This is particularly likely to be a problem in secondary data gathering where the researcher may not know just how the data were derived.
Errors in measurement
Quantitative data are often subject to measurement error and the size of that error may have important implications for both the way the data are used and for the scale of the data gathering effort.
Aside from errors due to malfunction of measuring equipment, which are of no interest in this context, error may take the form of bias, as in the under-reporting of small company activities in many official statistics; deliberate or instinctive falsehood, as in many answers to questionnaire surveys; or distortion of one form or another, as in the response of a laboratory amplifier to a high-frequency signal.
The practical implication of all three possibilities is the same: information is lost and the data do not fully represent the phenomenon under study. Though it is often easier for the engineer to overcome such difficulties by employing more sophisticated measuring devices, similar opportunities may well arise in the social sciences.
Another form of measurement error that is relevant to quantitative data is pure random error that is supposed on average to fluctuate about zero. Since this is relatively easy to cope with statistically, it is the usual (though not always the most accurate) model of error adopted.
In practice we can attempt to deal with measurement error in one of two ways. The first is to measure the phenomenon of interest by several different methods. Where each gives rise to random measurement error, a combination of the measurements can be expected to give a better estimate of the true value, provided the methods are not subject to the same error. Obviously this approach requires more data gathering effort, but it has much to commend it in fields where accurate measurements are difficult. It is thus much used in social science research, often under the term "triangulation".
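The idea of combining several independent measurements can be illustrated with a minimal sketch. The readings below are hypothetical; the assumption (stated in the comments) is that each method's error is random with mean zero and uncorrelated with the others, so averaging gives a better estimate than any single method.

```python
import statistics

def triangulate(measurements):
    """Combine independent measurements of the same quantity.

    Assumes each method's error is random with mean zero and that
    the methods' errors are uncorrelated, so the mean is a better
    estimate of the true value than any single reading.
    """
    return statistics.mean(measurements)

# Three hypothetical methods measuring the same length (mm)
readings = [10.2, 9.7, 10.1]
estimate = triangulate(readings)   # close to 10.0
```

If the methods share a common bias, averaging will not remove it; triangulation helps only with errors that differ between methods.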
The second way of dealing with measurement error that is more or less random is to increase the size of the sample, and this is discussed below.
Choosing the sample
Data gathering normally involves some kind of sampling. The conclusions that can validly be drawn from the sample depend critically on both the population sampled and the procedures used for generating the sample. The first step in choosing the sample is, accordingly, to choose a target population to be sampled that permits interesting conclusions to be drawn and to select a sample in such a way that the conclusions are valid. Though this is unlikely to be a problem for the physical scientist it certainly is in many other fields, particularly the social sciences. Very often, the sheer cost of data gathering pushes the student in the direction of some "convenience sample" that meets neither of these criteria.
Any statistical method requires a certain size of sample to have a reasonable probability of detecting an effect of interest and in these circumstances the collection of enough data may be quite beyond the resources available to the student researcher. Some social science projects, for example, are very unlikely to produce the hoped-for results because they are not based on enough observations. In such cases a large sample is needed if the effects are to be revealed. A crude rule of thumb applicable in a number of situations is that the sample size needed is proportional to the square of the accuracy of the estimates derived from the sample. Thus, to double the accuracy it is necessary to increase the sample size fourfold.
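The rule of thumb can be expressed directly. This is only the crude rule from the text (it rests on the standard error of a mean shrinking like one over the square root of the sample size); the function name and numbers are illustrative.

```python
def required_sample_size(base_n, accuracy_gain):
    """Crude rule of thumb: the sample size needed grows with the
    square of the desired gain in accuracy, because the standard
    error of a sample mean shrinks like 1/sqrt(n)."""
    return base_n * accuracy_gain ** 2

# Doubling accuracy requires four times the observations
n_doubled = required_sample_size(100, 2)   # 400
# Trebling accuracy requires nine times as many
n_trebled = required_sample_size(100, 3)   # 900
```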
There are many different procedures that can be used for sampling and the reader should consult a specialist text for further details.
Recording the data
An important aspect of the experimental design model is the idea of factors and, by implication at least, the values of all factor levels should be recorded along with the actual measurements of interest. This provides protection against the discovery that further variables, and therefore measurements, are relevant to the phenomenon in question. Equally, notes on the sources of data and time and date of collection can be extremely useful when, many months later, the researcher is attempting to correct an error or to decide whether a set of figures whose origin has long since been forgotten can be used in analysis. In both cases the recording of adequate additional information will help to ensure that few data that have been collected will prove to be unusable.
Researchers do waste effort by having to repeat data gathering activities because certain information was omitted originally. In practice, it is almost always straightforward to collect additional measurements at the time the initial data gathering takes place. In further discussions of data recording it will, therefore, be assumed that consideration has been given to exactly what data are to be recorded, and the focus now will be on how to record them.
In primary data gathering, recording may involve two processes. Firstly, data must be captured in some way that is feasible in the context in which they are to be gathered, following which it is often necessary to transcribe or convert the data into a form suited to computer input.
The main concern here is the reduction of data to a form suitable for computer analysis. Ratio and interval scaled data are already in this form and present no problem. Ordinal data can either be input as ranks or, equivalently, using letter codes – for example, A=1, B=2. Pictorial data need to be converted into numbers in some way or other, the most usual method nowadays being the use of a digital scanner. We note, though, that digitized picture data, although numeric, are not suitable for most analytical purposes; a picture generates too much numeric data. Nominal data may be recorded by using the number 1 to denote the presence of some attribute (for example, the item is green), or zero if the item does not possess it. For pure textual data there is little choice but to input them as they stand.
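A minimal coding sketch makes the point concrete. The rating scale and the attribute below are hypothetical, but the pattern – ordinal values as ranks, a nominal attribute as 0/1 – is the one described above.

```python
# Hypothetical ordinal scale mapped to ranks
ORDINAL_CODES = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}

def code_response(rating, is_green):
    """Convert one raw response into numeric form for analysis:
    the ordinal rating becomes a rank, the nominal attribute
    ('is the item green?') becomes 1 or 0."""
    return {
        "rating": ORDINAL_CODES[rating],
        "green": 1 if is_green else 0,
    }

record = code_response("good", True)   # {'rating': 3, 'green': 1}
```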
Transcription may also be the major process involved when secondary data are being used. This two-stage process is at best somewhat inefficient and at worst may introduce errors at the transcription stage, so automatic data gathering methods that collect the data directly in a form suitable for computer analysis have obvious attractions. To that end it is normal to use a digital scanner in conjunction with a text reading package to transcribe secondary data that exist only on paper, such as tables in books. When this is done, however, it is rare that transcription will be completely accurate, especially with older documents produced in non-computer typefaces. It is, therefore, necessary to undertake a careful correction process (which is much facilitated for prose by running the electronic text through a spell checker). Such processes do not, however, find all the errors without considerable effort. If the text is one that is useful to other researchers it may well be worth putting it on the Web. Such scholarly generosity has been the source of much material on the Web, especially in the arts and humanities.
The detection of errors at the data capture stage may, by analogy with data processing terminology, be dubbed validation. The ensuring of accurate transcription will similarly be referred to as verification.
Validation is primarily based on identifying implausible data: for example, a questionnaire that records a pregnant man or more typically, but more subtly, one anomalous liberal response from an individual amidst a host of authoritarian ones. Not all anomalies will, in fact, be errors and, conversely, such procedures will not identify data that could be correct but in fact are not. Successful validation is heavily dependent on experience and this is one reason why training in the use of data gathering techniques is necessary.
Verification lends itself to more mechanical methods. The traditional approach in data processing, for example, is for two different people to enter the same data into the computer system and then to accept the two sets of data if they are the same but otherwise to examine them for transcription error. This approach relies on the reasonable assumption that the same mistake is unlikely to be made by two different individuals. However, the student researcher is unlikely to be able to afford to pay for this type of verification, which is increasingly confined to large-scale professional surveys, and so needs to think of ways either of approximating to it or, better, of improving the quality of data entry.
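The double-entry check is easy to approximate in a few lines. This is a minimal sketch: the two lists stand for two independent transcriptions of the same records, and the function simply reports where they disagree so those records can be examined by hand.

```python
def verify_double_entry(entry_a, entry_b):
    """Compare two independent transcriptions of the same records
    and return the positions where they disagree, for manual
    checking against the source documents."""
    return [i for i, (a, b) in enumerate(zip(entry_a, entry_b)) if a != b]

first  = [23, 45, 17, 88, 9]
second = [23, 54, 17, 88, 9]   # digits transposed at position 1
mismatches = verify_double_entry(first, second)   # [1]
```

A lone student can approximate the two-person version by re-entering a sample of the data on a later day and running the same comparison.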
The rejection of data at the validation or verification stage is a somewhat negative process. Though transcription errors are usually remediable, validation errors will not be unless thought is given to making them so. The only way in which this can be done is to introduce redundancy – that is, extra information – into the data gathered so that incorrect or missing data can be reconstructed. If, for instance, the aim is to measure a length, one way is to measure it in millimetres and record it. If this is done incorrectly, however, the complete set of measurements related to this length will have to be thrown away. On the other hand, if it is also measured in inches, any error will be evident when that measurement is converted to millimetres. This example also throws light on the role of "feel" in validation. Many people in the UK and USA have a far better intrinsic concept of imperial measurements than metric ones and a check of this sort will accordingly have a good chance of detecting the error at the time when the measurement is made.
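The millimetres-and-inches example can be sketched as a validation check. The tolerance value is an assumption chosen for illustration; the point is that the redundant second measurement makes an otherwise undetectable recording error visible.

```python
MM_PER_INCH = 25.4

def check_length(mm, inches, tolerance_mm=2.0):
    """Cross-check a length recorded in both millimetres and inches.

    The redundant second measurement exposes recording errors: if the
    two disagree by more than the tolerance, at least one of them was
    recorded incorrectly and can be investigated or re-measured.
    """
    return abs(mm - inches * MM_PER_INCH) <= tolerance_mm

ok   = check_length(254.0, 10.0)   # True: the two records agree
bad  = check_length(524.0, 10.0)   # False: digits transposed in the mm record
```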
A related issue is the need to check what units a variable is measured in. The loss of NASA's Mars Climate Orbiter, after one engineering team supplied data in imperial units that another assumed to be metric, illustrates the point neatly. However, more subtle versions of this problem exist: time series of, say, office space construction may have switched from being recorded in square feet to square metres.
In many social science applications it may well be possible to approach the respondents again; and in science and engineering studies the measurements can, in principle, be repeated. Nonetheless, both of these approaches require effort and in some cases may for all practical purposes be impossible. Therefore, if the researcher is to avoid throwing away hard-won data it is advisable to devote a little thought to how errors in them can be detected and eliminated.
Though the avoidance of error is a common theme in all types of data capture or transcription there are many different methods that can be used for either or both of these purposes. These differ in the amount of equipment and preparation required to use them, in their costs and in their suitability for dealing with large volumes of data. Though the division is far from being clear cut it is useful to distinguish between methods that are primarily suited to data capture and those that are mainly used for transcription, and that approach will be followed here.
The one form of data that will be gathered by all researchers is their own research notes which are worthy of more attention than they are often afforded. Though the researcher who has pursued an almost uninterrupted academic career should have developed effective note-taking practice this may need amendment when, as is often the case, the research is concerned with a new field of study. The problems of the part-time researcher or of someone returning to academic study after a number of years are likely to be greater.
The basic problem with research notes is that they arise from a variety of activities, from the researcher's own reading through to occasional flashes of inspiration. Usually, they eventually comprise a huge mass of data of many different types. Furthermore, there is no simple way of ensuring that two different pieces of data that should be juxtaposed will be.
As far as notes on books are concerned the most effective practice is probably to make them as the books are read and to produce photocopies of selected passages of particular interest that can be annotated as the student wishes. The selections should rarely amount to more than a few per cent of the work in question unless some form of textual study is being undertaken, so there should be no problem with copyright law. Certainly it is usually a sign that the researcher has not digested the contents of a work if it is found necessary to copy most of it, and proper cross-referencing soon becomes impossible if the practice is repeated wholesale. Moreover, problems of copyright law and – even more importantly as far as academic institutions are concerned – plagiarism, are likely to arise. An alternative approach that avoids these difficulties is for the researcher to compile notes as the text is read.
In normal circumstances, researchers will want to produce substantial notes of their own relating to projected analyses, organization of the research report and so on. These are usually easier to deal with. For the unexpected insight it is worth carrying a pocketbook, notebook PC or electronic organizer in which sufficient information can be jotted down to enable the idea to be properly worked up later into notes.
Logbooks and journals are the simplest method of data recording available to the experimental scientist or the researcher conducting a field study. Their use is relatively straightforward and is often facilitated by employing a standard layout for each type of observation to be made. Appropriate blanks can be photocopied to be filled in and filed in a binder as required. It should be noted that, nowadays, the logbook will frequently be on a PC, since even in the field, notebook and sub-notebook computers provide a more convenient and flexible way of recording notes and observations. As research is a learning process some students find benefit in viewing the logbook as a chronological record; not a day-to-day diary but a record of key incidents. Examples of such would be: opinions expressed by others during the study; sudden insights gained; and more effective ways of conducting the research. Notes of this type could trigger action and, in some fields of study, assist in the production of the research report.
Interview notes and similar materials are rather more difficult to structure because it is hard to predetermine the course of an interview. Nevertheless, there will in most cases be an interview schedule listing those topics to be covered and this may well serve as the basis of a data gathering instrument with half a page, say, being allocated to each subject heading.
With a little experience it is usually possible for researchers to generate their own "shorthand" for recording, thus enabling them to come nearer to a verbatim record. In view of the high information content of pictorial data it makes obvious sense for the researcher to record data in that form, where possible.
Questionnaires provide a more structured approach to gathering data of this type. Where closed questions (those which provide for only a limited list of responses) are used, subsequent transcription is particularly easy. It pays to design them from the outset with processing in mind if it is intended that they should eventually be analysed by computer.
Tape or digital recorders are generally acceptable in most interviewing situations, subject, perhaps, to certain parts of the interview being "off the record". If using a tape recorder, it may well be worth carrying back-up tapes and batteries, and both "narrow" and "wide angle" microphones, so that the most appropriate type can be selected.
One aspect of tape recording which is frequently overlooked by student researchers is the cost of transcription. Six to eight hours of transcription may well be needed per hour of recording. Moreover, it needs special equipment and is best carried out by experienced staff.
For these reasons, and to cope with the situations where recording is not acceptable, the student still needs other methods of recording. Though the act of taking notes can be useful in pacing an interview, the ideal method is one that the student can carry out while still looking at the interviewee. At a minimum this will usually require some sort of shorthand or code with the ideal being the ability to recall every detail of the interview (remembering that non-verbal behaviour is often very important) an hour afterwards. It is useful to write notes on the interview as soon as possible after it has taken place.
Lightweight video cameras are easy to use and it is a straightforward matter to produce digital video that can be stored on a hard disk. Given the advent of video compression formats such as MPEG and the size of modern hard drives, substantial amounts of video material can easily be stored on a PC. As noted above, however, applying computer analysis to such video images remains, in general, a difficult task.
A special type of video image that is useful to many researchers, especially those involved in information systems research, is the image of PC screens. These can be stored as a video image or, in the case of web browser screens, as hypertext markup language (HTML). This type of data has the further advantage that software packages exist that enable the interaction of a user with a website to be analysed.
A review of common purposes of analysis
This section gives short summaries of common purposes of analysis, their aims, and the qualitative and quantitative techniques that can be used for each. These purposes will be discussed in turn.
Description involves a set of activities that are an essential first step in the development of most fields. Students who can identify a topic about which little is known, of whose importance others can be convinced and for which data can be collected may need to do little other than record them to have their work adjudged satisfactory. Usually, however, knowledge is not so rudimentary and structure must be put on the data by developing or inventing concepts or methods of classification.
In order to make any sense of data we need concepts that enable us to focus on those factors and measurements that are relevant to the field of study. In essence, a concept is a useful idea with a name and concept formulation is thus intimately associated with the idea of language. A concept to be useful must ideally satisfy a number of criteria. It must be unambiguous so that it is possible for different workers to agree whether or not it applies in any given case. Other workers should find it natural to use, and it should be unique and not merely a new name for a concept that already exists in some other field.
Relatively little work seems to have been done on how to effectively formulate concepts from textual data. It seems likely, however, that an important method is the use of analogy, for example, the application of the notion of "half-life", derived from atomic physics, to the declining usefulness of subject-specific journals over time.
Another very powerful way of developing concepts is the use of pictorial data. The need for the concept of "crater" in discussing the moon is obvious. In general, seeing provides a very powerful way of getting at concepts, for which reasons many statistical approaches present the results in pictorial as well as numerical form.
It is often the case that the researcher wishes to identify concepts in textual data derived from journal articles or transcripts of interviews or transcripts of the discussion of a "focus group" on the area of interest. Traditionally this has been done by reviewing the different documents and attempting to find common concepts. This may mean allowing for the fact that different authors or interviewees use different terms for the same concept: for example, some may talk of "new technology monitoring" while others talk of "identifying technological opportunities" while the researcher is happy that for the purposes of the research these can be considered identical.
A variety of statistical methods are available as aids to concept formation. These are all based on the underlying notion that good concepts are those which enable us to find differences among the objects under study. They accordingly attempt to tease out the major sources of differences in the data to form prototypes, at least, of useful concepts.
One of the oldest sets of statistical techniques is that of factor analysis. These techniques take a number of measurements for each object, for example, psychological test scores, and attempt to select from them a few combinations of these measurements that explain most of the differences between individuals: the so-called "factors", which can then be named. This approach is applied, for example in intelligence testing, to identify the factors of spatial ability and verbal ability.
Another approach is to compare objects according to how similar they are and then group them in descending order of similarity. This is the basis of cluster analysis. Thus, we might consider grouping people in the light of similarities in educational qualifications. A number of groups would then be expected to emerge naturally: for example, in the UK, those educated to postgraduate level, to graduate level, to A level, and so on.
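A very small sketch of the clustering idea, for one-dimensional data only, can be written in a few lines. This is single-linkage agglomerative clustering in miniature; the "years of education" figures are hypothetical, and a real study would use a statistical package rather than hand-rolled code.

```python
def cluster(values, threshold):
    """Minimal single-linkage agglomerative clustering for 1-D data:
    repeatedly merge the two closest clusters until the nearest pair
    of clusters is further apart than `threshold`."""
    clusters = [[v] for v in sorted(values)]
    while len(clusters) > 1:
        # gap between each pair of adjacent clusters (data are sorted)
        gaps = [clusters[i + 1][0] - clusters[i][-1]
                for i in range(len(clusters) - 1)]
        i = min(range(len(gaps)), key=gaps.__getitem__)
        if gaps[i] > threshold:
            break                      # remaining clusters are well separated
        clusters[i] = clusters[i] + clusters.pop(i + 1)
    return clusters

# Hypothetical "years of education": two natural groups emerge
groups = cluster([11, 12, 12, 16, 17], threshold=2)
# [[11, 12, 12], [16, 17]]
```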
The successful development of concepts results in a number of "pigeon holes" into which individual objects can be classified.
The easiest type of classification procedure to adopt is one that assesses each of the objects to be classified in the light of the concepts to be applied and then assigns it to the category which it most nearly fits. Thus, someone who spends five hours a week playing football for a payment of £50 to cover travel expenses might be assigned to the category "part-time footballer" or "amateur footballer" rather than "professional footballer" if the concepts relevant to this classification decision are "hours spent playing or practicing football" and "amount of income derived from playing football".
An obvious refinement of this idea is to weight the different concepts according to their importance. Thus, we might assign a weight of 100 to hours per week played and a weight of 5 to amount of money received and then classify as a professional footballer anyone scoring more than 4,000. With this weighting scheme our footballer would receive a weighted score of:
100 × 5 (hours/week) + 5 × 50 (£/week) = 500 + 250 = 750
and therefore be classified as an amateur footballer.
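The weighting scheme above translates directly into code. The weights and threshold are the ones used in the running example; they are illustrative choices, not a recommended scale.

```python
def weighted_score(hours_per_week, pounds_per_week,
                   w_hours=100, w_money=5):
    """Weighted combination of the two concepts: hours spent playing
    and income derived from playing (weights from the text's example)."""
    return w_hours * hours_per_week + w_money * pounds_per_week

def classify(hours, pounds, threshold=4000):
    """Classify as professional if the weighted score exceeds the threshold."""
    return "professional" if weighted_score(hours, pounds) > threshold else "amateur"

classify(5, 50)     # score 100*5 + 5*50 = 750  -> 'amateur'
classify(40, 600)   # score 100*40 + 5*600 = 7000 -> 'professional'
```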
Rather than derive the weightings subjectively it is often useful to calculate them using the statistical technique known as discriminant analysis. The essence of the approach is simple: a number of objects that have already been classified are taken. Measurements of a number of variables for each of them are used to set up a set of predictor functions that will enable future cases to be classified by using a weighted combination of the relevant measurements.
The ability to evaluate literature is a skill which has to be developed as students progress through any field of study. Students pursuing studies in humanities or fine arts will in addition be able to draw on the apparatus of critical analysis to which they will almost certainly be exposed during their studies.
The aim of content analysis is to put qualitative data into a more quantitative framework. It was originally devised by political scientists for the interpretation of official texts but it can also be used for, say, the analysis of tape recordings from discussion groups.
The essence of content analysis is to:
- identify the target communications;
- identify a number of dimensions of the subject in hand;
- go through each communication, assigning statements in it to one or other of the dimensions;
- count the number of times each dimension is addressed in each communication.
Thus, for a study of the impact of the concept of global warming on, say, the public consciousness in the UK, the target communications might be decided to be the quality newspapers over a specified period of time. A number of dimensions would suggest themselves from the literature, discussion with experts and so on, including issues like the impact on agricultural yields; the impact on biodiversity; the impact on low-lying countries, etc. It is then a straightforward matter to examine the newspapers concerned.
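The counting step of content analysis can be sketched as follows. The dimensions and their trigger phrases below are hypothetical stand-ins for the coding scheme a researcher would derive from the literature; real coding usually relies on human judgement rather than simple phrase matching, so this is only the mechanical tally at the end of the process.

```python
from collections import Counter

# Hypothetical dimensions with illustrative trigger phrases
DIMENSIONS = {
    "agriculture": ["crop yield", "harvest"],
    "biodiversity": ["species", "habitat"],
    "low-lying countries": ["sea level", "flooding"],
}

def content_analysis(text):
    """Count how often each dimension is addressed in one communication."""
    text = text.lower()
    counts = Counter()
    for dimension, phrases in DIMENSIONS.items():
        counts[dimension] = sum(text.count(p) for p in phrases)
    return counts

article = "Rising sea level threatens flooding; crop yield may fall."
counts = content_analysis(article)
# 'low-lying countries': 2, 'agriculture': 1, 'biodiversity': 0
```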
It will nevertheless be appreciated that, firstly, such a process can be very time-consuming and, secondly, that it is likely to be expedited considerably by confining attention to those newspapers to which access in electronic form can be obtained. In the latter case, analysis can be considerably expedited by the use of PC-based packages for qualitative data analysis such as NUD.IST.
Often students may wish to describe how different individuals or organizations view a particular situation, e.g. in the field of conflict studies. The technique of causal mapping allows their views to be described pictorially in terms of the cause and effects seen as operating by them.
Construction of measurement scales
A frequent purpose of analysis is the construction of a measurement scale of the interval or ratio type. For example, a host of different variables such as height and weight can be measured for a building, but none of them provides a direct measurement of the attribute "earthquake-proofness".
Traditionally, the approach to the problem of constructing scales has been to adopt surrogate measures: for example, the use of performance on standard flame tests, as a measure of fire resistance. Scales based on the weighting of a number of different attributes or values (as discussed in our footballer example above) have become an increasingly popular alternative: for example, in assessing the fire risk of buildings.
At a more sophisticated level, however, a host of techniques have been developed during the past thirty years, by psychologists in particular, for constructing scales for which no "obvious" measurement exists. These may be single (unidimensional) scales which measure a variable along a single dimension, for example, the measurement of an "intelligence quotient". On the other hand, multidimensional scaling techniques apply to situations where more than one concept, or dimension, is relevant. In the case of two or more dimensions, such techniques result in a "perceptual map". To revert to our earlier example of the footballer, it might well be sensible to measure interest in football along two dimensions: firstly, the average hours per week spent playing football; and, secondly, the proportion of the individual's income derived from football. Such a representation would obviously provide a richer measure of the variable of interest than a unidimensional scale.
In recent years, the analytic hierarchy process has attracted considerable interest as an approach to the construction of scales representing subjective judgements as to the relative importance of objects of interest, for example, the benefits from investments in different computer systems, or the preferred locations of businesses.
Application of a methodology
A type of research project that has grown in importance in recent years is one that requires students to demonstrate that they can apply some methodology successfully. A major reason for this is that the assessment of such an ability is more easily done through a project than through a conventional examination.
Analysis in the context of applying a methodology requires the student to do two things:
- explain the application of the methodology clearly;
- make it clear they understand the conceptual and theoretical bases of the methodology.
In many cases a third purpose will be added: to evaluate the methodology. It is by no means easy to achieve these aims. Many student engineering projects, for instance, consist of little other than a series of printouts that reflect a mechanical application of the methodology rather than any deeper understanding of it.
Many student research projects involve the design of something. Engineers may design test apparatus or prototype new products, architects a building complex, while health administration students may design new appointments systems. Like the application of a methodology this is a category of research project that has grown in importance over the past twenty years because of the unsuitability of conventional written examinations for assessing design knowledge.
Design involves a creative element, and traditionally inclined research students are unlikely to find themselves involved in it. With the growth of new forms of doctorate, e.g. in the visual arts, however, even doctoral students may find that a major purpose of their research is design.
Generation of empirical relationships
This section is concerned with the identification of regularities in data and of relationships amongst variables. This is an area in which research students can often expect the major results of their analysis to fall. Whilst the sciences and engineering pay considerable attention to the derivation of empirical laws, this is less so in the social sciences and humanities, and relatively little seems to have been published on suitable techniques. All in all, this area seems underrepresented in texts on research methodology, given its importance in much student research, and for this reason it will be explored at some length here.
The essence of the problems dealt with in this section is that there is usually no obvious idea of what relationship will be found, and the richness of the data needs to be displayed in such a way as to suggest fruitful avenues to explore; pictures often provide a good way of doing this.
The recognition of pattern and order in data is a fundamental step in the development of theories to explain them. The commonest quantitative approach to such pattern recognition is by the use of correlation methods. Patterns may conveniently be broken down into three basic types:
- Those showing association among variables.
- Those showing groupings.
- Those showing order or precedence relationships between variables.
Association between two variables is very easily detected using a scatter diagram in which one variable is plotted against another. The quantitative equivalent of a scatter diagram is, in many cases, the correlation coefficient, for which there is an extensive statistical theory.
Equivalents of the simple correlation coefficient exist for the case where there are more than two variables. A useful measure is the partial correlation coefficient that measures the strength of association between two variables when the effects of another variable on both of them are allowed for. This is useful, for instance, in cases of spurious correlation such as the relationship between the number of pigs in the USA and US output of cars, both of which are substantially associated with the third variable, US gross national product per head. Partial correlation methods are in turn closely related to those of path analysis, discussed later.
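The partial correlation coefficient can be computed from the three pairwise correlations. The sketch below uses the standard first-order formula; the data in the usage example are made up purely to exercise the functions.

```python
from math import sqrt

def pearson(x, y):
    """Ordinary (Pearson) correlation coefficient of two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y with the influence of z on both removed:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Made-up data: x and y are perfectly related; z is a third variable
x = [2, 4, 6, 8]
y = [1, 3, 5, 7]
z = [1, -1, 1, -1]
pearson(x, y)           # 1.0
partial_corr(x, y, z)   # still 1.0: z does not explain the x-y link
```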
Obviously, grouping techniques are closely related to the classification problem discussed earlier. In the present case, however, our interest is in situations in which the number of classifications (if any) is unknown. For this reason, methods such as cluster analysis are appropriate whereas discriminant analysis is not.
Precedence relationships are a type of pattern that occurs in many different contexts. They show order, precedence or priority. Perhaps the simplest instance is one where we search for a pattern in some sequence of activities, for example, in detecting a "typical" pattern of community growth from hamlet to city.
The quantitative approach to detecting sequences of this type is by the use of cross-correlation coefficients that measure the strength of the relationship between one variable and another a specified number of time units later. They are much used in econometrics and in control engineering for detecting the lag between a change in one variable and the corresponding change in another, for example, an upsurge in orders and the consequent upsurge in deliveries.
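A minimal sketch of lag detection, using an invented orders/deliveries series in which deliveries simply repeat orders two periods later; the cross-correlation is computed at several lags and the lag with the strongest relationship is picked out.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def lagged_corr(x, y, lag):
    """Cross-correlation: relationship between x[t] and y[t + lag]."""
    return pearson(x[:len(x) - lag], y[lag:])

# Invented series: deliveries echo orders two periods later
orders = [10, 12, 9, 14, 11, 15, 13, 16, 12, 17]
deliveries = [8, 9] + orders[:-2]

best_lag = max(range(5), key=lambda k: lagged_corr(orders, deliveries, k))
print(best_lag)  # → 2
```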
In qualitative analysis, pattern recognition is far more likely to be achieved through the application of some suitable theory. The obvious examples here are various theories of history that seek to explain the evolution of societies, of technologies, of movements, etc. A further area of application is in tracking the influence of, say, one artist on other artists. There are two broad patterns here. The first is where one artist influences another, e.g. Le Corbusier's influence on later architects; here we are concerned with tracing links that run only one way. The second is where artists influence each other, e.g. the members of the Cubist movement in the early 20th century, where the links run both ways.
Derivation of empirical laws
In many fields of technology it is possible to develop empirical laws in the form of simple equations relating one interval- or ratio-scaled variable to a few others. Since it is generally possible to find simple relationships between variables, there is a long tradition of using graphical methods for their determination, particularly where detailed theoretical knowledge is lacking. Such laws are of considerable practical use in engineering, and to researchers in associated fields the methods described may well be very familiar. In the main, however, the picture has been very different in the social sciences, where the general belief has been that there is no reason to expect relationships between variables to be simple and, therefore, that there is scant point in trying to establish them using graphical techniques. Accordingly, this section is primarily intended for researchers in the social sciences.
It is perhaps worth beginning by suggesting why simple laws can be found in the physical sciences and the circumstances under which it might be worthwhile looking for them in the social sciences. Basically simple relationships seem attainable in the physical sciences because they typically describe the behaviour of many millions of entities: for example, where we relate the maximum safe load on an embankment, comprising billions of molecules, to the angle its sides make with the horizontal. This suggests that success in finding simple empirical laws in the social sciences is most likely in fields where the behaviour of a very large number of objects is being described. Thus, as an example, in a number of countries the proportion of companies above a certain size, as measured by turnover or manpower, can be well described by a standard statistical distribution known as the Pareto distribution or 80/20 rule.
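A well-known property of the Pareto distribution makes the 80/20 rule concrete: the share of the total held by the top fraction p of units is p^(1 - 1/alpha), where alpha is the tail exponent. The sketch below, with an invented interpretation in terms of company turnover, chooses the alpha for which the 80/20 rule holds exactly.

```python
import math

def top_share(p, alpha):
    """Share of the total (turnover, say) held by the largest fraction p
    of firms, assuming sizes follow a Pareto distribution with tail
    exponent alpha > 1."""
    return p ** (1 - 1 / alpha)

alpha = math.log(5) / math.log(4)  # the exponent for which 80/20 holds
print(round(top_share(0.20, alpha), 3))  # → 0.8
```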
Searching for empirical relationships is best done using certain tricks of the trade. By far the most important are the use of various types of scale that lead to straight line graphs, since a straight line relationship is much easier to fit by eye and lends itself to unambiguous extrapolation.
The starting-point of any study of relationships between two variables will be, then, the graphing of Variable A against Variable B on simple linear scales, that is, the construction of a scatter diagram. If a reasonable straight-line fit is found, no more need be done. Otherwise the next step must be to apply non-linear scales to one or both variables. Graph papers are available with a variety of different scales. Indeed, a glance at the catalogue of a specialist supplier can in itself be a useful way of deciding possible non-linear relationships to explore. Similarly, the manuals for computer spreadsheet packages contain a variety of examples of the different scales provided and examples of their use.
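As a sketch of why logarithmic scales are the most useful of these tricks: a power law y = a * x^b plots as a straight line on log-log scales, with slope b and intercept log a. The data below are invented to follow y = 3x^2 exactly, so the straight-line fit on the logged values recovers both constants.

```python
import math

def fit_line(xs, ys):
    """Least-squares slope and intercept of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 4, 8, 16]
ys = [3 * x ** 2 for x in xs]  # invented data: y = a * x^b with a = 3, b = 2

# On log-log scales the curve becomes the line: log y = log a + b * log x
b, log_a = fit_line([math.log(x) for x in xs], [math.log(y) for y in ys])
print(round(b, 3), round(math.exp(log_a), 3))  # → 2.0 3.0
```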
Explanation and prediction in quantitative analysis
Traditionally in the Anglo-Saxon world, knowledge and research have been equated with the identification of causal relationships, and research directed to this end has been accorded the highest esteem. Many fields have not yet been developed to the level where causal explanation is possible or valid predictions can be made; these offer their own special research opportunities, as has already been discussed. Nonetheless, in science and technology, where sufficient knowledge exists to make explanation or prediction possible, it is hard to see what would be gained by not attempting them. In this sense causal explanation and prediction must be seen as involving a higher level of knowledge, though, of course, not necessarily a higher level of research skill.
The meaning of the notions of "cause" and "causality" has exercised philosophers for at least three centuries and continues to be the subject of lively debate. Experience suggests that, whereas epistemological considerations of this type are usually too time-consuming for the researcher involved in a short project, students undertaking a research degree will often find it necessary to give thought to these matters at some stage in their work.
In practice, this involves the interrelated activities of causal explanation and prediction which are often couched in terms of hypotheses: for example: "the existence of a close-knit Quaker community was an important factor in the early development of the iron industry" (implicit explanation); or, "the falling costs of computer hardware have made software costs a more important factor in developing a computer system" (implicit prediction). Since most tests of hypotheses appear to fall into one or other of these categories and the logical and statistical methods required are the same as those needed for explanation and prediction, they will not be discussed independently.
In what follows, explanation and prediction will be construed as enabling the values of one set of variables to be derived given the values of another. Thus biochemists may direct their efforts to explaining why the body rejects certain types of foreign tissue. Better explanations of tissue rejection in turn enable better predictions to be made about the likelihood of rejection given various forms of treatment. Equally, an important test of a theory is that it makes predictions which can be confirmed by observation or experiment. In pure science, then, explanation and prediction are intermingled. In fields such as engineering this may also be the case but the fact that research is often directed towards the formulation of empirical laws on which predictions can be based means that there also exists the possibility of successful prediction for which no satisfactory explanation can be given. Thus, in hydraulics it is possible to apply standard formulae to relate the flow of a river to its gradient and depth but to give a satisfactory explanation of the basis of the formulae may not be possible.
This type of situation illustrates another facet of the interrelationship between explanation and prediction; that is, in practice, we often use the same method for testing out whether we can satisfactorily predict a phenomenon as we do to establish whether we can successfully explain it.
In many social sciences, explanation is often deemed impossible because of the complexity of the systems involved. Frequently, the task of social sciences is, therefore, presented as finding associations between variables that can be generalised to various situations, for example, that urbanization leads to a growth in reported crimes. Though a variety of explanations of this phenomenon have been offered by criminologists, sociologists, and so forth, none can be said to command general acceptance. Nonetheless, such an association is useful, if it can be established, because it forms the basis of prediction.
Several techniques which are relevant to explanation and prediction will be examined briefly.
The technique of loglinear analysis explains the variations in probabilities of class membership. For example, the probability of a person being convicted of a crime before the age of 25 might be explained in terms of variables such as sex, socioeconomic status of the individual's family, highest educational level attained, etc.
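The simplest loglinear model is independence in a two-way table: the fitted log cell counts are sums of row and column effects with no association term. The sketch below uses an invented 2x2 table (sex by conviction before 25) and compares the likelihood-ratio statistic G^2 with the chi-square critical value; the numbers are purely illustrative.

```python
import math

# Invented counts: rows = male/female, columns = convicted / not convicted
table = [[30, 170],
         [10, 190]]

total = sum(sum(row) for row in table)
row_sums = [sum(row) for row in table]
col_sums = [sum(col) for col in zip(*table)]

# Fitted counts under the independence (no-association) loglinear model
expected = [[r * c / total for c in col_sums] for r in row_sums]

# Likelihood-ratio statistic G^2; compare with chi-square on 1 d.f. (3.84)
g2 = 2 * sum(o * math.log(o / e)
             for obs_row, exp_row in zip(table, expected)
             for o, e in zip(obs_row, exp_row))
print(round(g2, 2))  # ≈ 11.58, well above 3.84: an apparent association
```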
The experimental design model, as well as its applicability in analysing many different types of research data, has considerable virtues as a conceptual model of quantitative research directed towards explanation and prediction.
Fundamental to the model is the notion that the variable of interest can be measured on a ratio or interval scale and that the values of the variable to be explained or predicted are affected by a number of other variables, usually referred to as factors. Each factor takes on more than one value, and each value is called a factor level. Factors may, however, be measured on nominal scales (for example, fertilizer A versus fertilizer B) or represent fairly crude groupings (for example, application of less than 100 grams of fertilizer per square metre versus application of more than 100 grams per square metre). Finally, we assume that for each combination of factor levels we have at least one measurement of the variable whose value is to be explained or predicted. The model assumes this value is made up of a number of components: a base value, plus various additive effects due to each of the factor levels, plus effects due to interactions between factor levels, plus finally a random term representing errors of measurement, the effects of factors not considered directly, and so on.
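The additive decomposition can be sketched numerically. The two-factor yield table below is invented and deliberately purely additive, so the residual (interaction plus error) term comes out as zero; with real data it would not.

```python
# Invented yields: 2 fertilizers (rows) x 3 application rates (columns)
y = [[8.0, 10.0, 12.0],
     [10.0, 12.0, 14.0]]

rows, cols = len(y), len(y[0])
grand = sum(sum(r) for r in y) / (rows * cols)               # base value
row_eff = [sum(r) / cols - grand for r in y]                 # fertilizer effects
col_eff = [sum(y[i][j] for i in range(rows)) / rows - grand  # rate effects
           for j in range(cols)]

# What is left over: interaction plus error (zero for these additive data)
resid = [[y[i][j] - grand - row_eff[i] - col_eff[j] for j in range(cols)]
         for i in range(rows)]
print(grand, row_eff, col_eff)  # → 11.0 [-1.0, 1.0] [-2.0, 0.0, 2.0]
```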
The virtues of the experimental design model as a conceptual model of the processes involved in explanation and prediction are very considerable even if, for whatever reason, no attempt is made to carry out a statistical analysis. It offers an explanatory framework capable of handling complex relationships between the response variable and the factor levels, along with predictions of the effect of any particular set of factor levels. The factor levels can be recognised as independent variables, and the implicit requirement of the model that there be at least two levels of each factor enables their effects to be isolated. The experimental design model assumes that the researcher can control the experiment to the extent of selecting the factors and factor levels whose effects are to be examined. This is, of course, not always the case in the social sciences, but it may still be possible to approximate to an experimental design by exploiting the fact that particular variables vary between one organization or country and another, or over time.
The regression model has the attraction that it deals with situations where there is no control over the selection of factor levels. In principle, it expresses a dependent variable y in terms of various independent variables x1, x2, and so on, on which the value of y is supposed to depend, the precise form of the relationship being derived from the data. Obviously, this model represents a generalization of the experimental design model, since x1, x2, etc. can represent combinations of factor levels/treatments and may be nominal, interval or ratio data. The particular advantage of the regression model is that it does not require observations to be available for specific factor combinations and to a large extent, then, it is capable of utilizing the data "as they are". On the other hand, this usually means that the rigorous control implicit in experimental design is lost, and so the researcher cannot always have the same faith in the results as when an experimental design approach is feasible.
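As a sketch of fitting such a model from uncontrolled observations, the invented data below are generated so that y equals 1 + 2*x1 + 3*x2 exactly, and the coefficients are recovered by solving the normal equations (X'X) beta = X'y; no statistical library is assumed.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Invented observations; y is constructed to equal 1 + 2*x1 + 3*x2 exactly
data = [(1, 1), (2, 1), (1, 3), (3, 2), (2, 4), (4, 1)]
X = [[1.0, x1, x2] for x1, x2 in data]          # design matrix with intercept
ys = [1 + 2 * x1 + 3 * x2 for x1, x2 in data]

# Normal equations (X'X) beta = X'y
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * yv for row, yv in zip(X, ys)) for i in range(3)]
beta = solve(XtX, Xty)
print([round(v, 3) for v in beta])  # → [1.0, 2.0, 3.0]
```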
A useful extension of the regression model is that of path analysis. In essence, this attempts to select the set of relationships between variables that is most consistent with the available data. As such it has an obvious bearing on the problem of distinguishing independent, dependent and intermediate variables. As a typical example we might consider two possible explanations for the strong correlation between father's social status and son's social status that is observed in many Western countries. The simple explanation is that the father's status determines the son's status directly. A less obvious explanation is that the father's status determines the level to which the son is educated and the level of the son's education determines his status. Path analysis enables a choice to be made between such competing hypotheses.
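A minimal sketch of how such a choice can be made: under the chain F -> E -> S (father's status affects education, education affects son's status, with no direct link), the tracing rule of path analysis implies r(F,S) = r(F,E) * r(E,S), so the partial correlation of F and S controlling for E should vanish. The correlation values below are invented to satisfy this exactly.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y with z held constant."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Invented correlations: F = father's status, E = son's education, S = son's status
r_FE, r_ES, r_FS = 0.6, 0.8, 0.48   # note r_FS == r_FE * r_ES

direct = partial_corr(r_FS, r_FE, r_ES)   # ≈ 0: no direct F -> S path needed
```

A partial correlation near zero favours the mediated explanation; a substantial remaining partial correlation would point to a direct path as well.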
Explanation and prediction in qualitative analysis
Explanation and prediction in qualitative analysis take a clearly different form from their quantitative counterparts. They typically involve the close analysis of textual and other material, e.g. multimedia databases in graphic design, using theoretical constructs and frameworks derived from the humanities and social sciences.
Very often, e.g. in studying political systems, explanation and prediction are most easily approached through comparative analysis. For example, if we find that a particular industry is dominated by large companies in both a developed country and in a developing country, this suggests that ownership has little to do with average incomes or other variables that differ between the countries. On the other hand, if both countries have stock markets on which company shares are traded internationally, it remains a possibility, to be investigated further, that the structure of the industry may be affected by its ownership. In effect, this enables the student to have many of the benefits of the experimental design model discussed above.
Some research students find themselves working in the broad area of policy analysis. The aim of their research is either to carry out an evaluation of the effects of past policies and draw lessons from it (evaluation research) or to formulate, and argue the case for, new policies. Much social science research is of this type as, less obviously, is a considerable amount of research in technology. What differentiates this type of research from those discussed hitherto is that it is primarily aimed at non-academic audiences. Nevertheless, though it has an obvious 'political' dimension it must still meet academic standards, and since such research clearly involves explanation and prediction, the relevant standards are those pertaining to those topics.
Theory generation/construction of a shared language
The highest level of explanation and prediction is that of theory generation. As such, it is most likely to be undertaken by research students. Theories have an important characteristic that we wish to emphasise because it is important to student researchers at all levels. Good theories provide a common language which can be used to discuss a particular field. They, therefore, offer an important aid to researchers both in formulating their research questions and in carrying out their analysis.
Preparing for the viva voce or oral examination
The doctoral student (and possibly the master's student) will have to undergo a viva voce or oral examination (the two terms are normally used interchangeably). Along North American lines, this is often viewed as a "thesis defence". Doctoral students should always bear this in mind during their writing, and their draft chapters should include all the supportive evidence needed. The defence of their argument should be concerned with the methodologies employed, the value claimed for their findings and their recommendations for future work, not with explaining and justifying the omission of corroborative material that should have been included in the thesis.
However long it may last, the oral examination will only require a fraction of the time that the research project as a whole will take. Nevertheless, it is far from a formality and should be prepared for thoroughly with a view to reinforcing the good opinion that the examiners should have formed from the study of the written report.
The wise student will also, at the earliest possible stage, have done some homework on the examiners and will attempt in the writing of the report to accommodate the implications of any preferences and attitudes which they are thought to hold. It is not too cynical to suggest either, particularly at doctoral level when the examiner will be an expert in the field, that references to the examiner's own work should be made. The academic world is, of course, well known for its conflicts of opinion on topics; students should accordingly do their best to ensure that there will be no antipathy towards them simply because of the line of argument they have pursued or the way they have presented it.
Students should, therefore, attempt to place themselves in the position of the examiners and consider the type of question which may be put in order to evaluate the report. To provide a systematic basis for anticipating how their research may be evaluated, a number of questions are posed here under each of eight criteria that the research student should seek to satisfy. In relating these criteria to their own situation, students may find it useful to remember that for higher degrees individual examiners may well seek the advice of colleagues on particular aspects of the research outside their own sphere of interest or understanding.
Evidence of an original investigation or the testing of ideas
- Was the aim of the research clearly described?
- Were the hypotheses to be tested, questions to be answered, or the methods to be developed clearly stated?
- Was the relationship between the current and previous research in related topic areas defined, with similarities and differences stressed?
- Are the nature and extent of the original contribution clear?
Competence in independent work or experimentation
- Was the methodology employed appropriate? Was its use justified and was the way it was applied adequately described?
- Were variables that might influence the study recognised and either controlled in the research design or properly measured?
- Were valid and reliable instruments used to collect the data?
- Was there evidence of care and accuracy in recording and summarising the data?
- Is evidence displayed of knowledge of and the ability to use all relevant data sources?
- Were limitations inherent in the study recognised and stated?
- Were the conclusions reached justifiable in the light of the data and the way they were analysed?
An understanding of appropriate techniques
- Given the facilities available, did it seem that the best possible techniques were employed to gather and analyse data?
- Was full justification given for the use of the techniques selected and were they adequately described? In particular were they properly related to the stated aims of the research?
Ability to make critical use of published work and source materials
- Was the literature referenced pertinent to the research?
- To what extent could general reference to the literature be criticised on the grounds of insufficiency or excessiveness?
- Was evidence presented of skills in searching the literature?
- Was due credit given to previous workers for ideas and techniques used by the author?
- Is evidence displayed of the ability to identify key items in the literature and to compare, contrast and critically review them?
Appreciation of the relationship of the special theme to the wider field of knowledge
- Was the relationship between the current and previous research in related topic areas defined, with similarities and differences stressed?
- Was literature in related disciplines reviewed?
- Was an attempt made to present previous work within an overall conceptual framework and in a systematic way?
Worthy, in part, of publication
- Was the organization of the report logical and was the style attractive?
- With appropriate extraction and editing could the basis of articles or a book be identified?
Originality as shown by the topic researched or the methodology employed
- To what extent was the topic selected novel?
- Was there evidence of innovation in research methodology compared with previous practice in the field?
Distinct contribution to knowledge
- What new material was reported?
- To what extent would the new material be perceived as a valuable addition to a field of knowledge?
- To what extent do the conclusions overturn or challenge previous beliefs?
- Were the findings compared with the findings of any similar studies?
- Was the new contribution clearly delimited and prospects for further work identified?
- To what extent does the work open up whole new areas for future research?
Students should rehearse their answers to an appropriate selection from the above list of questions. This procedure should indicate what additional evidence will need to be taken into the examination. In the main, any supplementary material will relate to the data gathering and analytical phases, but may also include papers which students have written during their research.
Whatever the level of the examination, it should go without saying that students, if called upon, should be able to defend, explain, elaborate on, or even apologize for any part of their report. In this last respect, the tolerance that may be extended towards the undergraduate is unlikely to apply in the case of the doctoral student. If such a student discovers an unacceptable weakness after the thesis has been submitted, criticism is best anticipated and coped with by preparing a typed statement for distribution in advance of the examination.
The oral examination
Though practice varies depending on the level of the research project the oral examination will almost always involve at least two examiners. Usually, there will be at least one external examiner present for a postgraduate project and this may also be the case at undergraduate level.
In most UK universities the supervisor will not be appointed as the internal examiner for doctoral theses. Undergraduate or master's students may, however, find that their supervisor happens to be functioning as an internal examiner, and this will lead to a definite difference in attitude from that to which the student has been accustomed. At doctoral level in most UK universities the role of internal examiner is taken by another member of staff, perhaps with the supervisor's formal role being that of "in attendance". In addition, for a research project that has involved collaboration with outside bodies, various people not on the staff of the institution may also function as examiners.
The prime purpose of an oral examination is to satisfy the examiners that the report presented represents individual or acceptably collaborative effort. If collaboration has been involved evidence of the degree of cooperation will be considered.
The main concern of the examiners is to ensure that any claims made in writing can be justified and that the analytical methods used are understood. In part, the examiners are adding credibility to the report by approving it, particularly in the case of the research thesis, which will then be included in bibliographies. During the examination, however, students should expect to have to express opinions on topics which they may feel are peripheral to their studies, in order to convince the examiners of their expertise in the wider field which includes their area of study. Even students well able to defend their written arguments may feel some prior concern at uncertainty as to where the discussion might lead. In particular, the student's craftsmanship and honesty are to some extent on trial, as well as the merit of the research report itself.
As far as preparation is concerned, individual students must decide what best suits them. It is obviously sensible to set aside a period of time beforehand so as to put oneself into the right frame of mind for the oral examination and to reacquaint oneself with the details of a research report that may have been completed a significant number of weeks before. Providing such a period may cause difficulty for part-time students or for students now working in a full-time job. Perhaps the best mental preparation of all is for students to be in a position to exploit the strengths of their writing and to pre-empt criticism of its weaknesses. Obviously, the advice of a friend or colleague, or better still a rehearsal of the oral examination, can be of great help here. Doctoral students must remind themselves that their conclusions, at least, should be original. They should have developed considerable expertise in their chosen topic area and should be able to defend their thesis from a position of some strength. They must, however, expect questions which probe the scope of the topic, the nature of the target population, the type of cross-sectional comparison selected, and so on, at whatever level is appropriate for the type of research that has been conducted.
With regard to the examination itself possibly the most important advice that can be offered is that students should not attempt to pull the wool over the examiners' eyes. Very rarely will it be possible to get away with this in front of experts.
Initially at least, the meeting will be conducted by the examiners with the student playing a very reactive role. At this stage it is important that students answer concisely but completely the questions put to them by the examiners, since the nature of their replies will be taken as a guide to the way in which the research itself was conducted. Therefore, where they are uncertain as to exactly what is meant by a question, students should request further clarification before attempting to answer. Nor, where the question is difficult or subtle, should they hesitate to reflect so that they can give a considered reply.
Though examiners are likely to concentrate on what they perceive to be the key strengths and weaknesses of the work in question, various examiners may differ as to what these are. Accordingly, students must anticipate that discussion will range widely and that questions will be posed on many different aspects of the research.
Citation and referencing
A research report differs from many other forms of writing (a newspaper article, for example) in that it should make clear what material and ideas have been originated by you, and what is owed to the work of others. At most levels of research, it is important that you show you have understood other people's ideas. You can do this by demonstrating the ability to summarize them and present them within your own framework. This means that, under most circumstances, the amount of direct quotation should be fairly small.
Quotations are extracts (usually short) from someone else's work. References give source information for quotations or ideas. Both are discussed in some detail below. A bibliography is a list of all works consulted (but not necessarily referenced). The level of degree you are working towards will govern how full and comprehensive a listing of the relevant literature in the field should be. At doctoral level, you are normally expected to demonstrate that you have understood the body of relevant literature in full.
Obviously, there are situations where direct quotation is necessary. For example, a study of the impact of Kierkegaard on twentieth-century writers on existentialism would be strange indeed without substantial quotations from Kierkegaard himself and from the texts of later authors who appear to have been influenced by him. Equally, it would be foolish to rewrite much theory in applied mathematics, say, in a different notation.
But it would not be acceptable to produce a research report dominated by quotations from various authors glued together by an occasional sentence supplied by the researcher.
Quotations should be properly differentiated from the main body of the research report. Indented, single-spaced text is perhaps the easiest way of clearly differentiating a longer passage, while quotation marks are generally used for a short passage or a single sentence. Typeface variations, such as italics, may also be used. In every case, the work from which the quotation is drawn should be clearly referenced, as discussed below. Institutions often give guidelines for quotations similar to the above, which should, of course, be followed.
Where other authors are drawn on for ideas rather than direct quotation, things can be a little more difficult. Many ideas are in the public domain and need no reference at all. You should avoid absurdities such as:
"Most chairs have four legs (Adam, 1775; Chippendale, 1778), Similar tendencies have been noted in tables (Hepplewhite, 1782; Sheraton, 1804)".
The rules about referencing other work to some extent depend on the customs of the field in which you are writing. On the whole, a sensible approach would be to reference only those ideas which an inexpert reader might think were your own, even though they are not.
In such cases you should try to give the original source of the idea and also the place where it was found. For example, if you have become familiar with information theory and the work of Shannon through reading someone else's introduction, the following practice should normally be used:
"As Brillouin's (1962, pp. 13-20) account of Shannon's (1948) work shows, the theory of information has much in common with ideas from fields of physics such as thermodynamics".
You should treat the matter of citation with considerable care. An examiner is likely, for instance, to consider that a student who consistently attributes ideas to later references rather than to the person who actually originated them has conducted an inadequate literature search. Thus, in the example above, attributing Shannon's theory to Brillouin may give a poor impression.
A poorer impression still is created by students who use the ideas of others without attribution. Indeed, if this is done by directly using someone else's writings without due attribution this constitutes plagiarism, which all academic institutions consider a serious offence against academic regulations.
Citation style must comply with whatever standards are prescribed by the institution. As far as citation in the body of the text is concerned there are two broad schemes in use: the Harvard method (last name of author plus date) or the numerical approach (also called the Vancouver method) in which each reference is given a specific number. The Harvard approach is simpler, and permits the insertion or deletion of references at will, whereas in the latter case any references which are introduced or deleted at a later stage will necessitate all of the subsequent numbers being changed. The Harvard method is predominantly the one required by institutions and publishers. In some cases a research institution will also ask for the page number of the work cited to appear in the text reference (name/date/page(s)) if it can be specifically attributed to a page or pages.
In a reference, emphasis may be given to the idea or to the author, as:
"... a strong claim is made that a system of equity must exist within every firm (Smith, 1999, p. 63)" (idea emphasis)
"... Smith (1999, p. 63 ) has argued that every firm should have within it a system of equity" (author emphasis)
In both these cases Smith's initial has not been included. If this might lead to confusion – if there were two or more Smiths working in the field – an initial or first name should be included:
"... Alan Smith (1999) has argued ..."
"a system of equity must exist within every firm (Smith, A., 1999)"
If an author has written two works in the same year, these are usually distinguished by an a or b suffix:
"Smith (1999a) has argued that every firm ..."
"Societies institutionalize ethical systems through legal frameworks (Smith, 1999b)"
These should be similarly cited in the reference list at the end of the report (see below).
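The a/b suffix rule can be expressed as a small routine. The following is a minimal sketch in Python; the function name, authors and titles are invented for illustration only:

```python
# Sketch: assign "a", "b", ... suffixes to works by the same author in the
# same year, in the order they are listed. Works without a clash keep a
# plain year. All names and data below are invented.
from collections import defaultdict
from string import ascii_lowercase

def label_same_year_works(references):
    """references: list of (author, year, title) tuples."""
    totals = defaultdict(int)          # total works per (author, year)
    for author, year, _ in references:
        totals[(author, year)] += 1
    counts = defaultdict(int)          # works seen so far per (author, year)
    labelled = []
    for author, year, title in references:
        key = (author, year)
        if totals[key] > 1:            # suffix needed only when there is a clash
            suffix = ascii_lowercase[counts[key]]
            counts[key] += 1
            labelled.append((f"{author} ({year}{suffix})", title))
        else:
            labelled.append((f"{author} ({year})", title))
    return labelled

refs = [
    ("Smith", 1999, "On equity in firms"),
    ("Smith", 1999, "Ethics and legal frameworks"),
    ("Jones", 1999, "An unrelated work"),
]
for citation, title in label_same_year_works(refs):
    print(citation, "-", title)
```

Run on the invented list above, this labels the two Smith works "Smith (1999a)" and "Smith (1999b)" while Jones keeps a plain "Jones (1999)".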
Some institutions specify that references in the text should always be complete in themselves, with name, date and page information, even if that leads to repetition of a name already mentioned:
"Alan Smith (Smith, 2003, pp. 26-36) has argued ..."
Otherwise it would be normal to complete the reference information from the text flow:
"As Alan Smith (2003, pp. 26-36) argued ..."
Numbered references (the Vancouver method) are assigned in order of first appearance, i.e. the first work cited is numbered "1", the second "2" and so on. If reference 1 is cited again later in the work, it still carries the number 1:
"Smith [1] continues and develops the argument proposed by Martyn [2]"
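The first-appearance numbering rule can be sketched in a few lines of Python; the function name and the citation strings are invented for illustration:

```python
# Sketch of Vancouver-style numbering: each distinct work takes the number
# of its first citation, and any repeat citation reuses that number.
def vancouver_numbers(citation_sequence):
    numbers = {}                               # work -> assigned number
    out = []
    for work in citation_sequence:
        if work not in numbers:
            numbers[work] = len(numbers) + 1   # next unused number
        out.append(numbers[work])
    return out

# Smith cited first, then Martyn, then Smith again:
print(vancouver_numbers(["Smith 1999", "Martyn 1997", "Smith 1999"]))
# -> [1, 2, 1]
```

The repeat citation of Smith keeps its original number 1, which is exactly why inserting a new reference earlier in the text forces every later number to change.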
With a numerical approach, it is better to make reference to a name as well, where possible. For example:
"Smith [1] argued that every society should have within it a system of equity"
is preferable to
"It has been argued [1] that every society ..."
Whatever reference style is adopted, stay consistent. Always read and follow the institution's guidance notes on reference style and, if in doubt, ask your supervisor for clarification.
References within the body of the text confirm that students are able to relate their thoughts to the body of knowledge. These references must relate exactly to the list of references at the end of your report. Examiners usually scrutinize the list of references very carefully, and will be critical of lists which are not comprehensive, accurate and well presented.
If the numerical approach is adopted, the text and reference can be linked easily in either direction. One limitation is that a reader may not be able to establish quickly whether a particular author has been referenced. As numerical lists run in number order, this means that listings of authors will not be alphabetical, and an author with several different (non-sequential) citations will appear in different places in the list.
With the Harvard name/date approach someone reading through the list of references will not be able to turn immediately to that part of the text to establish what ideas have been taken from the author in question. The alphabetic listing does, however, allow the reader to see very quickly and easily which authors (and which works of these authors) have been cited.
A standard procedure should be adopted for references. A commonly employed approach is:
- for books – author and initials (in capitals), year of publication (in brackets), title of book (underlined or in italics, with edition if any), page numbers, publisher, place of publication (if known):
Smith, A. (2003), Exploring Business Theories (2nd Edition), pp. 61-63, Rothwell Press, New York
- for articles – author and initials (in capitals), year of publication (in brackets), title of article, name of journal (underlined or in italics), volume number, issue, page number:
Smith, A. (2003), "Systems of equity; a new paradigm", Journal of Business Theory, Vol. 22 Issue 3, p. 42
No standard protocol for website referencing has yet been widely accepted. It is sensible to treat web references as nearly as possible like textual references. Again, one consistent style should be followed, which will be the one prescribed for textual references by the institution. Page numbers would not be cited. The important issue is to provide a reference which can be located as easily as possible in the reference list and which can be followed by other researchers if desired.
A website reference should name the author if known, or the title of the site if not, and should always include the date accessed. The entry in the reference list should cite the URL (uniform resource locator) in full. It is usual to omit the http:// prefix but always to include the www. part:
Text: "The approach made by just-desserts.com (2004) is of particular interest here".
Reference list: Just-desserts.com (accessed 24 April 2004) The Perfect Tiramisu, www.just-desserts.com/tiramisu
Students should note that website URLs may change, and a website reference will not usually provide as permanent a reference source for future researchers as will a text reference.
Certain Latin words and phrases encountered in scholarly works can be useful in referencing. These are normally italicized, as are other foreign language phrases not in common currency.
Sometimes you may wish to quote something which contains an obvious grammatical, typographical or numerical error. In this case "(sic)", typed as here within brackets and placed immediately after the error, will show that the error comes from the original source and is not the writer's own ("Vice President Quayle's often-quoted 'Potatoe' (sic) reference was widely seen to be ...")
Other Latin abbreviations are:
Et al.: "and the others". This allows a work with several authors (Smith, Kelly and Robinson, 1998) to be cited as (Smith et al., 1998). Et al. is normally used only for three or more authors; Smith and Kelly would appear as (Smith and Kelly, 1998).
Ibidem (abbreviated to ibid.): "in the same work". This allows successive references to the same work without repeating the name each time. It replaces the author details of the immediately preceding reference and should be followed by a page number if appropriate:
Owen's later work was strongly challenged by Fowler (1998, p. 34). Fowler further makes the point that culture involves a change in the individual's position within a firm, not just to the individual in isolation (ibid., p. 38).
Opere citato (abbreviated to op. cit.): "in the work cited". This requires the author's name and page number, and refers to a work already cited, but not the immediately preceding reference:
Fowler (1998, p. 34) disagrees strongly with Owen's later works, particularly his "Magic and Ritual" (Owen, 1966), a position also endorsed by Heskey and Murphy (2001) and Hamman (2003). Fowler further makes the point that ritual involves a change in the individual's position within a society, not just to the individual in isolation (Fowler, op. cit. p. 38).
Loco citato (abbreviated to loc. cit.): "in the place cited". This is used with the author's name and is similar to op. cit. but is more precise as it refers to the same passage in a book already cited.
Fowler (1998, p. 34) disagrees strongly with Owen's later work. Fowler further makes the point that culture involves a change in the individual's position within a firm, not just to the individual in isolation (Fowler, loc. cit.).
Doctoral theses often contain several hundred references and the work involved in ensuring that these are systematically presented and are error free can be formidable.
References to reports with unknown authors
Reports should be referenced as for works with named authors, except that the title of the report, in full or in a clearly understandable and identifiable abbreviated form, should appear in the text and in the reference list in place of the author's name. Thus:
"The population of Germany grew by 6.2% between 1988 and 1998 but is forecast to grow at only 1.1% between 1998 and 2008 (OECD Population Report, 2000)".
You should make the compilation of the final bibliographic listing as easy as possible by keeping notes in a proper and consistent format as you go. As new references are added, the bibliography will also need to be extended.
Bibliographies should always be listed alphabetically by the surname of the first author. Microsoft Word's Table > Sort (A–Z) command is useful for ordering a list alphabetically, so long as the entries are prepared in the right format in the first place (last name, initial, etc.).
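The same alphabetical ordering can be done outside a word processor. A minimal sketch in Python, assuming each entry is kept in the consistent "Surname, Initial. (Year), ..." format recommended above; the entries themselves are invented:

```python
# Sketch: alphabetize a bibliography by the first author's surname.
# Works only if every entry starts "Surname, Initial. (Year), ..." -
# i.e. the consistent format kept up while taking notes.
entries = [
    'Smith, A. (2003), "Systems of equity: a new paradigm", ...',
    'Fowler, J. (1998), "Culture and the Firm", ...',
    'Owen, P. (1966), "Magic and Ritual", ...',
]

def surname(entry):
    # The sort key is the text before the first comma, lower-cased
    # so that capitalization does not affect the ordering.
    return entry.split(",", 1)[0].strip().lower()

bibliography = sorted(entries, key=surname)
for e in bibliography:
    print(e)
```

With the invented entries above, Fowler sorts first, then Owen, then Smith; the point, as with Word's sort, is that a consistent entry format must exist before any automatic ordering can work.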
Editing and proofreading
Editing is designed to ensure that the research findings are written and presented in as effective a manner as is possible. Editing will include the task of incorporating additional and newer material into the body of the text and revising the structure as appropriate.
A good essay, report or dissertation should be easily readable. But readability does not in itself guarantee quality of argument. Both need to be addressed in the editing process.
Monroe, Meredith, and Fisher (1977) suggested that good style emerges from naming, predicating and modifying. The first two of these suggestions are the basis of the "core" elements of a sentence, which comprise the subject, verb and object. The core is then expanded by adding modifiers:
- "... inflation is created" (the core idea);
- "... inflation is created by excessive wage demands" (the core idea has been modified);
- " ... inflation is created by excessive wage demands rather than by an increase in money supply" (the modifier has been modified).
With each sentence being made up of a core idea and modifiers, Monroe et al. then proceed to recommend a procedure for editing:
- Underline the core elements of every sentence. Find the subject, the verb, and the object of the sentence. If there is more than one subject, verb, or object, underline all of them.
- Look for the core idea. Make sure that the main idea is expressed in the core of the sentence and that this basic idea makes sense by itself.
- Check for modifiers between the core elements. As a general rule, ten or more words between the subject and the verb is too many.
- Look for misplaced modifiers. When you come across a potential problem, draw an arrow from the modifier to what it modifies. A long arrow means that you should rework the sentence.
- Check items for precision. Have you previously defined the specialized terms? Examine all words representing qualitative judgements, and check that you have used them correctly and that the reader will understand what specific qualities are being summed up in each word.
- Avoid ambiguity. Rather than referring to the "Health Service" stipulate the "UK National Health Service".
Editing the full report
You will eventually have assembled all sections of your report. Students with supervisors should have obtained regular guidance from them and this guidance will be reflected in the writing. Neither student nor supervisor will, however, be able to assess the whole report until the last section has been written. It is important that plans for final editing are clear.
You should not expect more than one careful full reading from your supervisor. This should occur at the well-edited draft stage, in what you consider final form, with no obvious errors or gaps. If editing alone is insufficient at that stage, and additional analysis is called for, project completion may be delayed significantly.
In the absence of a supervisor the whole responsibility rests with the student although it may be possible to find an acquaintance (including a staff member) to assist.
In a research degree, although a thesis is examined as an exclusively individual effort (unless collaborative work is acknowledged), a bad failure will reflect adversely on the student's supervisor, particularly if it is evident that little guidance was given. If, however, a poor piece of work, supposedly in near-final form, is submitted to the supervisor, he or she will probably communicate this fact to the institution or to an external examiner (particularly if the supervisor believes the relationship to have been a troublesome one).
In short: if you mess up, you probably can't blame it on your supervisor!
The task of proofreading is the final stage of writing before the report is bound into its covers. In publishing, a "proof" is original text which has been typeset and output for checking. Proofreading is the process in which the typeset material is checked for deviation from the original copy.
There are two main types of errors:
- Systematic errors are, for example, the consistent misuse of apostrophes or the consistent misspelling of an author's name.
- Random errors are, for example, the attribution of an article to the wrong author or a one-off misspelling.
Provided that you are aware of systematic errors and how to correct them, they are readily removed from a word-processed text by a simple global edit. Most random errors that involve spelling mistakes will be detected by a spell checker, though care still needs to be taken with word pairs such as "practice" and "practise", and grammatically logical sentences containing incorrect but correctly spelt words will not be picked up. It is therefore useful, as a preliminary to proofreading, to spell check the text. A spell checker alone is not sufficient.
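The global edit for a systematic error amounts to one find-and-replace over the whole text. A minimal sketch in Python, using an invented misspelling of an author's name:

```python
# Sketch: remove a systematic error (a consistently misspelt author name)
# with one global substitution. "Fowlor" is an invented misspelling.
import re

text = "Fowlor (1998) disagrees with Owen. Fowlor further argues ..."

# \b word boundaries keep the replacement to the whole word only,
# so a longer word that happens to contain the string is left alone.
corrected = re.sub(r"\bFowlor\b", "Fowler", text)
print(corrected)
# -> "Fowler (1998) disagrees with Owen. Fowler further argues ..."
```

Because the error is systematic, a single substitution fixes every occurrence at once, which is exactly why such errors are the easy ones to remove.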
Some further points to be noted on spell checkers are:
- Packages may be geared to different spellings (such as US or UK versions of English). Most institutions are willing to accept US spellings provided that they are used consistently. Students who have learnt US rather than UK English may need to obtain a US dictionary.
- Students should be able to add technical terms, and so on, to the custom dictionary used by the spell checker.
Every student who writes a report for presentation should endeavour to ensure that, at least in the typographical sense, the report is perfect.
A word of caution – proofreading one's own work is notoriously difficult. It is a much better idea to have someone – a friend or colleague, rather than a professional proofreader – do it for you. Some useful rules for those who will be proofing their own material are:
- read each line in turn;
- recognize that intense concentration is needed and break off every few minutes;
- read aloud;
- take a sample (say 5 per cent) of those pages on which no errors have been noted and re-read them.
Particular attention should be paid to: spelling errors; faults in grammar; inconsistencies (for example, where the same reference is cited with a variety of different dates); and omissions.
This stage of the writing is tedious but important. In contrast to the irritation created by numerous typographical mistakes, readers and examiners will be impressed by well-presented text which is error free.