The Life Changes Trust recognises that the level of evaluation should be proportionate to the funding provided. We will work with those we fund to agree on realistic and proportionate evaluation plans that can deliver the information and learning we both require. Proportionality means that evaluation activities should be in proportion to the size of a project, its significance, and the risk associated with it. This reflects an important principle for the Life Changes Trust and other funders – a commitment to minimise any additional monitoring and evaluation burden on those we fund and work with. This is why we support self-evaluation and participatory evaluation integrated into delivery.
Evaluation activity requires resources including time, money, and expertise.
It is important that the resources devoted to evaluation activities are in proportion to the scale of the project and the value in knowledge and understanding that will be added by such activities. This may mean devoting greater resources to the evaluation of larger, strategically significant interventions, and taking a relatively lighter touch with smaller projects, although this guideline will not apply in every case.
The Life Changes Trust is committed to supporting effective evaluation. The Trust generally expects to allocate a proportion of funding to evaluation activity. In some cases we may make proportionately large investments in evaluation to build an evidence base for the future. In all cases, we recognise that under-resourcing of evaluations jeopardises evaluation effectiveness and therefore the value of evaluation efforts. Further information on Budgeting for Evaluation can be found elsewhere in this guide.
Scope of evaluation activity
The scale, scope, and reach of evaluation activity should again be considered relative to the overall level of funding required to deliver the activity.
The evaluation activity associated with a large and innovative project should generally be reasonably comprehensive. On the other hand, it may be appropriate for a tried-and-tested project working with a small group over a short timescale to carry out a lower level of self-evaluation activity.
Choices regarding the scale and scope of the evaluation will influence the choice of evaluation designs and methods used. In certain cases it may be sensible to restrict the scope of the evaluation (including the number of questions, size of the sample, methods of data collection, and analysis options).
Frequency of measurement
Proportionality also relates to what data you collect and how often you collect it to provide evidence for outcomes. Evaluation works best when you collect information (monitor) as you go along. Where possible, this should be integrated into routine delivery activity. This can be simple counting, e.g. how often a beneficiary visits a certain location, or qualitative data from observations after group sessions, captured by a simple process.
Traditionally, to assess change, you would collect information at two or more points in time, usually at the start and end. A cycle more focused towards ongoing learning and improvement will not draw such rigid distinctions, but will need to identify points to stop and reflect on the information already gathered and what this means for the activity. You must do what works best in your circumstances. On a short project, formal measurement at the beginning, middle and end might leave the measurement points too close together to identify change and potential learning, whereas on a longer project the same schedule may be too infrequent.
While our focus here is on evaluation that happens during the intervention, there will also be circumstances where other approaches are appropriate. There are a number of typical evaluation designs that are relevant here:
- During - Collecting information on an ongoing basis during implementation is a way to identify the association between project activities and outcomes and a way to capture useful learning that can feed into improvement.
- Before and after - Participants or situations are looked at before the project and then again after the project, e.g. before and after observations of behaviour.
- At the end only - Carried out as the project is coming to an end, e.g. an end-of-session questionnaire. Although common, this lacks reliability because we do not know what the situation looked like before the intervention.
- Retrospective - Where participants are asked to recall their situation or feelings prior to getting involved and after participating. This approach is only as good as people’s memories, but may allow people some time to reflect on particularly complex or challenging issues.
- Longitudinal - Carried out repeatedly, over time, often for some time after the activity has finished.
It is important to ensure that the time taken up by measurement or evaluation is proportionate for beneficiaries. If possible, this should be agreed with beneficiaries at the outset and can be amended over time. You should also consider how to involve beneficiaries in the cycle of learning, including analysing the data to identify learning and improvement actions.