Experimental designs are evaluation methods that attempt to demonstrate that a particular intervention or activity caused a particular outcome. They do this by comparing different cases or groups:
- An intervention group (those who participate in the intervention or activity)
- A control group (those who don’t).
A baseline measure (pre-intervention) is taken to understand the starting point of all groups in the study. After implementation, the differences between the intervention group and the control group are measured. The two groups in the study must be similar to each other, so that differences measured can be inferred as the effect of the intervention, not as a result of group or contextual differences.
Experimental design is used most extensively in the natural sciences (e.g. medicine, chemistry and biological science). It is often cited as providing the best validity, i.e. whether observed changes can be attributed to the intervention and not due to other possible causes.
Statistical significance is a calculation which measures the likelihood that a relationship or outcome is caused by the intervention rather than by random chance. Tests of statistical significance can be applied to the difference between the control and intervention groups and can tell us how strong any association is between intervention and outcome.
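As a minimal sketch of how such a test works, the example below uses a simple permutation test on illustrative outcome scores (the data and the choice of test are assumptions for illustration, not a method named in this guide). It asks: if there were no real intervention effect, how often would random reshuffling of the outcomes produce a difference in group means at least as large as the one observed?

```python
import random
import statistics

def permutation_test(intervention, control, n_permutations=10_000, seed=0):
    """Estimate a two-sided p-value for the difference in group means
    by repeatedly reshuffling outcomes between the two groups."""
    rng = random.Random(seed)
    observed = statistics.mean(intervention) - statistics.mean(control)
    pooled = list(intervention) + list(control)
    n = len(intervention)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical outcome scores (illustrative data only)
intervention_scores = [12, 15, 14, 16, 13, 17, 15, 14]
control_scores = [10, 11, 12, 10, 13, 11, 12, 10]
p_value = permutation_test(intervention_scores, control_scores)
```

A small p-value (conventionally below 0.05) suggests the measured difference between the groups is unlikely to be due to random chance alone.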
Selected methods and techniques
a) Randomised controlled trials (RCTs) are a type of experimental research. They are the most well-known experimental method and are often described as the ‘gold standard’ of scientific research.
Initially used primarily in the medical field, RCTs have more recently grown in popularity as a way of testing the impact of social interventions. Participants are randomly allocated, usually without their knowledge, to either the intervention or the non-intervention group. This aims to remove bias from the allocation process.
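Random allocation itself is straightforward to sketch. A minimal example, using hypothetical participant IDs (an assumption for illustration), might look like:

```python
import random

def randomise(participants, seed=None):
    """Randomly allocate participants into two groups of (near-)equal
    size; randomisation removes selection bias from the allocation."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs (illustrative only)
intervention_group, control_group = randomise(range(1, 21), seed=42)
```

In practice trial software also handles stratification and allocation concealment; this sketch only shows the core idea of chance-based assignment.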
In medical (drug) trials those not receiving the intervention are usually given a non-active placebo drug. In the social sphere it may be that one group receives an enhanced intervention while the control group are given the ‘business as usual’ approach.
Whether it is ethical or feasible to conduct such research in complex social and human systems is an area of much debate.
b) Quasi-experimental design is research in which the two groups being compared are non-randomly allocated. For example, a comparison group is selected that resembles as closely as possible the group receiving the intervention, i.e. a population with similar characteristics.
It is difficult to control for differences between two groups (e.g. those receiving an intervention may know about it and respond to it). What Works Scotland (2015) have suggested utilising synthetic control methods to tackle this. This approach combines data from multiple control areas to build up, over time, a more robust composite control area. This is a new approach and has not yet been used in Scotland. Its use is likely to be in large-scale trial work.
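The core idea of a synthetic control, weighting several control areas so their combined pre-intervention trend matches the intervention area, can be sketched as below. All area names and data are hypothetical, and real applications solve a formal constrained optimisation problem rather than the coarse grid search used here for illustration.

```python
# Hypothetical yearly outcomes, pre-intervention, for one intervention
# area and three control areas (illustrative data only).
pre_intervention = [10.0, 11.0, 12.0]
controls = {
    "area_a": [9.0, 10.0, 11.0],
    "area_b": [12.0, 13.0, 14.0],
    "area_c": [10.0, 12.0, 14.0],
}

def fit_error(weights):
    """Mean squared gap between the weighted combination of control
    areas and the intervention area over the pre-intervention years."""
    synthetic = [
        sum(w * controls[area][t] for area, w in weights.items())
        for t in range(len(pre_intervention))
    ]
    return sum((s - y) ** 2
               for s, y in zip(synthetic, pre_intervention)) / len(pre_intervention)

# Coarse grid search over convex weights (weights are non-negative and
# sum to one, so the synthetic area stays a blend of real areas).
best_weights, best_err = None, float("inf")
steps = [i / 20 for i in range(21)]
for wa in steps:
    for wb in steps:
        wc = round(1 - wa - wb, 10)
        if wc < 0:
            continue
        w = {"area_a": wa, "area_b": wb, "area_c": wc}
        err = fit_error(w)
        if err < best_err:
            best_weights, best_err = w, err
```

The resulting weighted blend then serves as the comparison trajectory after the intervention: any post-intervention divergence between the intervention area and its synthetic counterpart is treated as the estimated effect.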
c) Natural experiments are where naturally occurring variations across areas (e.g. in policies) are exploited by researchers to assess the impact of the difference. For example, the introduction of smoking bans in particular areas can be contrasted with areas without bans to see if public health differences begin to emerge. As with quasi-experimental designs, it is difficult to draw clear causal inferences. Natural experiments are observational studies and the researcher has little (if any) control over the social conditions of the experiment.
Strengths
- Randomised controlled trials (RCTs) allow you to investigate the effect of an intervention while eliminating some common forms of bias.
- Natural experiments can be a pragmatic, cost-effective research design if data are already available for analysis in national data sources.
Limitations
- Results may oversimplify causation: how well can experimental methods work with complex interventions in complex social systems?
- There may be limited generalisability. We may only see the effects of intervention on those in the trial.
- Feasibility. Experimental methods can be time-consuming and expensive.
- Recruitment and retention of volunteers to studies is costly and can be difficult.
- A high level of technical expertise is required.
- There is a need to address ethical questions, e.g. can researchers justify withholding valued interventions from the control group?
- Controlling for differences between the groups is difficult even if they are matched on main characteristics.
A good discussion of the use of RCTs and other experimental designs in the third sector is provided by the NCVO Charities Evaluation Service: Randomised controlled trials – gold standard or fool’s gold? The role of experimental methods in voluntary sector impact assessment.
Better Evaluation provide a brief introduction with examples of RCTs.