
8.3.1.1 Quasi-Experimental Evaluation

The basic structure of a quasi-experimental evaluation involves examining the impact of an intervention by taking measurements before and after it is implemented. An evaluation may examine, for example, the impact of an intervention on crime levels or on individuals' perceptions. In practice, conducting a quasi-experimental evaluation is often complex because inferring cause and effect is difficult (Sherman et al., 1997). Evidence must be collected to establish whether any observed change is due to the intervention being researched or to other causes, known as confounding variables. Sherman et al. (1997) developed a five-point scale, the Maryland Scientific Methods Scale (SMS), to evaluate the methodological quality of studies: confidence in the results is highest at level 5, and level 3 is regarded as the minimum required to achieve reasonably accurate results. The criteria for each level of the scale are:

Level 1: Correlation between a prevention programme and a measure of crime at one point in time (e.g. areas with CCTV have lower crime rates than areas without CCTV)

Level 2: Measures of crime before and after the programme, with no comparable control conditions (e.g. crime decreased after CCTV was installed)

Level 3: Measures of crime before and after the programme in experimental and control conditions (e.g. crime decreased after CCTV was installed in an experimental area, but there was no decrease in crime in a comparable area)

Level 4: Measures of crime before and after in multiple experimental and control units, controlling for the variables that influence crime (e.g. victimisation of premises under CCTV surveillance decreased compared to victimisation of control premises, after controlling for features of premises that influenced their victimisation)

Level 5: Random assignment of programme and control conditions to units (e.g. victimisation of premises randomly assigned to have CCTV surveillance decreased compared to victimisation of control premises)
(Farrington, 2002)
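The difference between level 2 and level 3 evidence can be made concrete with a minimal sketch. All of the crime counts below are invented for illustration, and the comparison ignores the exposure, seasonality and confounding adjustments a real evaluation would make:

```python
# Minimal sketch of a level 2 vs level 3 comparison on the SMS.
# Counts are hypothetical illustrations, not data from any study.

def relative_effect(before_exp, after_exp, before_ctrl, after_ctrl):
    """Ratio of the crime trend in the experimental area to the trend
    in the control area; values below 1.0 suggest crime fell more
    (or rose less) where the intervention was in place."""
    return (after_exp / before_exp) / (after_ctrl / before_ctrl)

# Level 2 evidence: before/after measures in the experimental area only.
before_exp, after_exp = 120, 90            # a 25% fall after CCTV installed
print(f"level 2 trend: {after_exp / before_exp:.2f}")

# Level 3 evidence: the same change set against a comparable control area.
before_ctrl, after_ctrl = 110, 85          # the control area also fell ~23%
rel = relative_effect(before_exp, after_exp, before_ctrl, after_ctrl)
print(f"level 3 relative effect: {rel:.2f}")
```

In this hypothetical case the level 2 figure alone suggests a 25% fall, but once a control area is included the relative effect is close to 1.0: crime fell almost as much without the intervention, so little of the change can safely be attributed to it.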

When researching the impact of security technologies it is very difficult to achieve level 5 on the SMS because interventions are often implemented across areas and groups without any scope for random assignment. Research within real-world settings must also take account of ethical issues: a researcher cannot dictate that one group experiences a technological security sanction (e.g. electronic monitoring) while another group is deprived of it (Finn and Muirhead-Steves, 2002: 309; Bonta et al., 2000a: 324). A range of dependent (outcome) variables have been used to measure the impact of security technologies, including recorded crime levels across target areas (e.g. Welsh and Farrington, 2002, 2008), recidivism rates of offenders (Finn and Muirhead-Steves, 2002) and public attitudes (e.g. Gill and Spriggs, 2005d).

One of the main difficulties in conducting real-world research is identifying suitable control units. Ideally, control and experimental units are identical, so that when an intervention is introduced into the experimental unit any difference between the two can be attributed to the intervention. Problems can occur in matching control and experimental conditions, especially when examining the impact of interventions within real-world settings such as cities and towns. Over an evaluation period, inconsistent changes may occur across the two conditions, meaning it is no longer valid to compare them. Security technologies are rarely implemented as an isolated measure, so it is often difficult to unpick their impact from quantitative measures taken in experimental and control areas. For example, levels of detected crime are often used to evaluate CCTV, but a range of other activities can affect crime levels, making the impact of the cameras difficult to isolate.

An important issue that needs to be factored into any evaluation is displacement. An intervention may simply displace a problem to another area, or prompt offenders to change the type of crime they commit, how they commit it and/or when they commit it (Reppetto, 1976). Advocates of situational techniques have acknowledged that it is nearly impossible to find evidence demonstrating that displacement has not occurred, and this is an inherent weakness of research on displacement (Barr and Pease, 1990; Clarke, 2004). A fuller understanding of displacement effects can be gained by integrating research methodologies. For example, some forms of geographical displacement can be measured by examining crime trends in buffer zones around intervention areas (Gill and Spriggs, 2005) or by looking in detail at offending patterns within intervention areas. Interviewing offenders can provide evidence of whether interventions cause them to change the timing, location or method of their offending.
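The buffer-zone approach can be sketched as a simple comparison of crime trends across intervention, buffer and control areas. The area names and counts below are invented, and real studies would adjust for area size, population and background crime trends:

```python
# Hedged sketch of a buffer-zone check for geographical displacement.
# All counts are hypothetical; this is an illustration of the logic,
# not a reproduction of any published evaluation's method.

def pct_change(before, after):
    """Percentage change in recorded crime from before to after."""
    return (after - before) / before * 100.0

areas = {
    "intervention": (200, 140),   # crime fell where CCTV was installed
    "buffer":       (150, 170),   # crime rose in the surrounding buffer
    "control":      (180, 178),   # broadly flat in the control area
}

for name, (before, after) in areas.items():
    print(f"{name:>12}: {pct_change(before, after):+.1f}%")
```

A rise in the buffer zone that clearly outstrips the control-area trend, as in these invented figures, is consistent with geographical displacement, but on its own it does not prove it; as the text notes, offender interviews and detailed offending patterns are needed to build a fuller picture.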

The Campbell Collaboration (www.campbellcollaboration.org) advocates conducting systematic reviews across a range of research studies to 'estimate the average effect size in evaluations' (Welsh and Farrington, 2008: 12). This type of research does not produce new empirical data but draws together the findings from a range of studies, and is referred to as a meta-analysis.
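In outline, estimating an average effect size works by pooling the effect sizes of individual studies, typically weighting each by the inverse of its variance so that more precise studies count for more. The sketch below uses a fixed-effect model with invented log effect sizes; real syntheses such as Welsh and Farrington (2008) involve systematic study selection, quality screening (e.g. against the SMS) and heterogeneity checks that are omitted here:

```python
import math

# Hedged sketch of fixed-effect meta-analysis: pool log effect sizes
# (e.g. log odds ratios) across studies by inverse-variance weighting.
# The study values below are hypothetical.

studies = [
    # (log effect size, variance of the log effect size)
    (-0.30, 0.04),
    (-0.10, 0.02),
    ( 0.05, 0.05),
]

weights = [1.0 / var for _, var in studies]
pooled_log = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low = math.exp(pooled_log - 1.96 * pooled_se)
high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled effect: {math.exp(pooled_log):.3f} (95% CI {low:.3f}-{high:.3f})")
```

Working on the log scale keeps ratio-type effect sizes symmetric around "no effect" (a pooled value of 1.0 after exponentiation), which is why odds ratios are usually pooled this way.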