Quasi-experiments can be a good compromise between experimental and non-experimental designs. They are more robust than non-experimental evaluations, but not as complex, costly or time-consuming as experimental evaluations.
Like experimental designs, quasi-experiments involve a comparison between those who receive the intervention and those who do not (i.e. an intervention group and a comparison group). The difference with a quasi-experimental design is that respondents are not randomly allocated to these groups. Because of this lack of random allocation, the group that does not receive the intervention is called a comparison group, not a control group as in experiments.
Before and after intervention with a comparison group
In a quasi-experimental evaluation, the intervention and comparison groups are used but the participants are not randomly allocated to the groups. Allocation to groups could be done simply by picking which individuals go into which group on an arbitrary basis, or through more careful selection such as making sure an equal number of males and females go into each group, as well as an equal proportion of age ranges and ethnicities. The comparison group does not receive an intervention.
As participants are not randomly allocated to groups, the study cannot be called an experimental design, and the term 'control group' cannot be used, as there is no assurance that the two groups are equal before being exposed to the intervention. With a control group, because of the randomisation, you can claim that personal characteristics and other attributes, such as level of education, are equally distributed between the intervention group and the control group. Therefore the two groups are the same (termed equivalent) before the intervention begins.
As you will not be allocating people to groups randomly, the next best thing you can do is match the participants in the two groups on relevant characteristics. For example, you could ensure the groups contain equal numbers of people with a similar number of lessons (for learner drivers), socio-economic status, or age. This ensures that you compare two groups that are alike. The more characteristics the two groups are matched on, the greater the chance that differences between the groups found in relation to the desired change (e.g. attitudes) are indeed due to the intervention and nothing else.
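The matching described above can be sketched in code. The participants, attributes and matching characteristics below (age band and socio-economic status) are entirely hypothetical, chosen only to illustrate the pairing logic:

```python
# Hypothetical participants, described only by the characteristics to match on.
intervention = [
    {"id": "I1", "age_band": "17-19", "ses": "low"},
    {"id": "I2", "age_band": "20-24", "ses": "high"},
]

candidates = [
    {"id": "C1", "age_band": "20-24", "ses": "high"},
    {"id": "C2", "age_band": "17-19", "ses": "low"},
    {"id": "C3", "age_band": "17-19", "ses": "high"},
]

def match_groups(intervention, candidates):
    """Pair each intervention participant with an unused comparison
    candidate who shares the same age band and socio-economic status."""
    used = set()
    pairs = []
    for person in intervention:
        for cand in candidates:
            if cand["id"] in used:
                continue
            if (cand["age_band"], cand["ses"]) == (person["age_band"], person["ses"]):
                pairs.append((person["id"], cand["id"]))
                used.add(cand["id"])
                break
    return pairs

print(match_groups(intervention, candidates))  # [('I1', 'C2'), ('I2', 'C1')]
```

The more characteristics you add to the matching key, the closer the two groups become, but the harder it is to find a match for every participant.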
To calculate the overall change achieved by the intervention, you deduct the total change found in the comparison group from the total change found in the intervention group:
- (Intervention group, post score – pre score) – (Comparison group, post score – pre score)
- This difference in change between the intervention and comparison groups is the estimate of the effect of the intervention.
- A quasi-experiment will provide useful information about whether the intervention is related to the changes you observed in the intervention group.
- The changes observed in the comparison group can tell you the change that would have occurred naturally, i.e. without the presence of the intervention. For example, as participants age, or have different experiences, their views may change anyway over time. In a quasi-experiment you are looking for the change that occurred purely because of the intervention, and for no other reason, such as ageing.
- This design is easier to carry out than a randomised controlled trial (RCT) and will be the strongest design available when you do not have the opportunity to randomise the participants to control and experimental groups.
- You will need to be able to make sure that the comparison group does not receive the intervention, or something similar, during the period of the intervention. Even if the comparison group simply heard about what the intervention involved, it could affect their thoughts and/or behaviour.
- It can be difficult to retain contact with your participants over time and your findings may be skewed depending on the characteristics of those who 'drop out'.
- The pre-test questions may, to some degree, affect the post-test results as the pre-test questions may prime the participants, educate them about the issue under study, or they may think that the researchers are expecting different answers in the post-test.
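The change calculation above can be illustrated with hypothetical pre and post scores (the numbers are invented for illustration):

```python
# Hypothetical pre/post attitude scores (higher = safer attitudes).
intervention_pre, intervention_post = 52.0, 68.0
comparison_pre, comparison_post = 51.0, 55.0

# Change within each group.
intervention_change = intervention_post - intervention_pre  # 16.0
comparison_change = comparison_post - comparison_pre        # 4.0

# (Intervention group, post - pre) - (Comparison group, post - pre):
# the comparison group's change represents what would have happened anyway
# (e.g. through ageing), so subtracting it isolates the intervention's effect.
intervention_effect = intervention_change - comparison_change
print(intervention_effect)  # 12.0
```

Here both groups improved, but the intervention group improved by 12 points more than the natural change seen in the comparison group.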
A Road Safety Officer was frequently asked to visit schools to talk about the school crossing patrol as part of a lesson about 'people who help us'. He wanted to know if this was a good use of his time, and whether to develop a special package of lessons and supporting material for schools to use after the visit. He questioned children, parents and teachers at a sample of schools and found that the lessons were appreciated by most people, especially the children, who could also explain how to cross the road safely outside their school (post intervention).
A small scale, before and after, survey found that children at a local primary school were more likely to use the school crossing patrol to cross the road one week after the lesson on 'people who help us'. However, it emerged that the school had also arranged a number of its own road safety activities at around the same time, including a visit by a police officer, and an assembly to introduce the crossing patrol person to the new intake of pupils.
To decide if it would be worthwhile to continue the visits and develop the support material it would have been useful to also arrange to survey children's road crossing behaviour before and after a similar lesson with a comparison school not using other strategies at the same time, and then to compare the findings.
After intervention only with a comparison group
This involves identifying a comparison group who are not currently receiving the intervention, or who are receiving a different form of the intervention. Post (after) intervention measurements are taken from both the intervention group and the comparison group at the same time point. Individuals are not allocated randomly to either group, so a comparison group, which may already exist naturally (another class from the same year group, for example), could be identified even after you have begun to deliver the intervention. This type of evaluation should show whether the outcomes for the intervention group are different from those for the comparison group.
- Using a comparison group gives you some idea of what would have happened in the absence of the intervention.
- The post-test-only design is sometimes the only practical design where it has not been possible to randomise groups or to collect baseline data before the intervention is delivered.
- As measurements are taken at only one time point, it is a cheaper and quicker option with data easier to analyse.
- The main weakness of this design is selection bias. Without randomly allocating groups, matching the participants (so there are people with similar characteristics in each group), or taking a before measure, there is no way of knowing whether the differences seen in the outcome measures between the two groups were due to the presence of the intervention or to some other factors, such as individual differences or environment.
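A minimal sketch of the after-only comparison, using hypothetical post-test scores. Note that, as the bullet above warns, the resulting difference cannot by itself be attributed to the intervention:

```python
# Hypothetical post-test scores (after-only design): no baseline exists,
# so the only comparison is between the two groups at a single time point.
intervention_scores = [7, 8, 6, 9, 8]  # e.g. road-safety knowledge scores
comparison_scores = [5, 6, 5, 7, 6]

def mean(scores):
    return sum(scores) / len(scores)

# The observed difference between group averages. With no randomisation,
# matching, or pre-test, selection bias means this difference could reflect
# pre-existing differences between the groups rather than the intervention.
difference = mean(intervention_scores) - mean(comparison_scores)
print(round(difference, 2))  # 1.8
```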
A member of the emergency services gave a talk, and handed out literature, on the impact of drink-driving to Year 10 pupils in one school. In another local school, the literature was given out to Year 10 pupils, but without the accompanying face-to-face talk.
Post-test measurements showed that the Year 10 pupils in the school that received both the talk and the literature were, on average, reportedly less likely to drink and drive than the Year 10 pupils in the school that only received the literature. The intervention was deemed a success and repeated elsewhere.
However, in the school that received both the talk and the literature, a well-known Year 10 pupil had recently been seriously injured in a road traffic accident. Thus, the pupils' tolerance of drinking and driving was already low before the intervention began, and their awareness of consequences high. Without a pre-test measurement, or randomly allocating pupils into groups where one receives the intervention and one does not, it was impossible to know what, if any, effect the intervention actually had. This also highlights the need to use a comparison group which is as similar as possible to the intervention group.