Evaluating early intervention at the local level: Five rules of thumb
We know that evaluating the impact of decisions and actions in a complex environment can be challenging. However, we believe that by commissioning or planning evaluations that adhere to our five ‘rules of thumb’, local areas can be more confident that they have put in place a good process, and that their findings are more likely to be reliable, relevant – and in turn, useful.
Five rules of thumb
We believe evaluations commissioned at the local level should:
- Be well designed and conducted: This means that all methods are in line with good practice and clearly contribute to assessing the success of the intervention. This includes consideration of how to measure outcomes and create a valid counterfactual.
- Have quantitative data on outcomes as a key component: The evaluation should test whether there is a quantifiable improvement in at least one outcome of interest – for EIF, that means an outcome for children and young people, such as school attendance or levels of anxiety. While input measures (such as the number of sessions attended) or qualitative data (such as children's or families' perceptions of programme delivery) can be valuable for other reasons, they are not sufficient to establish the effectiveness of an intervention.
- Report on pre-intervention and post-intervention outcomes: This means that the evaluation tests whether there is an association between introducing the change and improving outcomes. This is why collecting baseline data prior to introducing the change is essential.
- Use valid and reliable measures of outcomes: This means selecting measures that capture what they claim to measure (validity) and that produce consistent results (reliability). If the measures used are not valid and reliable, we cannot have confidence in a study's findings and conclusions.
- Be transparent and acknowledge the proper limits of the analysis: This means that all data and methods are clearly reported, and that the limitations of the analysis, along with any caveats and assumptions, are properly recognised.
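To make the third and fourth rules concrete, a pre/post outcome comparison can be sketched in a few lines of code. This is a minimal, illustrative example only – the attendance figures are hypothetical and the paired t-test is one possible analytic choice, not a method prescribed by these guidelines:

```python
# Minimal sketch of a pre/post outcome comparison (hypothetical data).
# Each pair is the same child's school attendance (%) before (baseline)
# and after the intervention, so we analyse the per-child change.
from statistics import mean, stdev
from math import sqrt

pre = [82, 75, 90, 68, 71, 85, 79, 74]    # baseline attendance (%)
post = [88, 80, 91, 75, 78, 86, 85, 79]   # post-intervention attendance (%)

diffs = [after - before for before, after in zip(pre, post)]
mean_change = mean(diffs)

# Paired t statistic: mean change divided by its standard error.
t_stat = mean_change / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean change: {mean_change:.2f} percentage points, t = {t_stat:.2f}")
```

Note that without the baseline (`pre`) column collected before the change was introduced, no such comparison is possible – which is why the third rule insists on pre-intervention data. A design with a valid counterfactual (for example, a comparison group) would strengthen this further, since pre/post change alone cannot rule out other explanations for the improvement.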
These guidelines do not prescribe any particular approach to evaluation or analytic methods, and some evaluations will have greater ambitions and a more demanding specification. However, we believe that they do provide a useful basis for a serious attempt at evaluating impact in a challenging, complex environment.
We have applied these guidelines in the course of identifying case studies for our section on Early intervention into action.