Early intervention into action: Innovation and evaluation
‘Early intervention into action’ provides case studies of places that have made changes in how they work and put in place an evaluation to help them capture and learn from the impact of those changes.
One of the questions we are asked most often is where early intervention has been done well, and how other places can learn from those examples.
The UK early intervention sector is young and maturing quickly, so knowledge-sharing is key. The evidence base for early intervention continues to grow, and many places are innovating and experimenting with new approaches. Each of these innovations and reforms is, in turn, a fantastic opportunity to generate more evidence about what works: what does it take to achieve positive change within local communities and existing local systems?
The evaluation challenge
For EIF, effective early intervention means early intervention that has been evaluated to gauge its impact. We all want to make a difference to children and young people’s lives, but resources are always finite. This is why it is vital that local commissioners and service leaders are able to monitor, test and adapt the programmes and systems that they put in place.
We know that evaluating the impact of decisions and actions in a complex environment can be challenging. The scientific trials and experiments that can be used to evaluate the impact of individual programmes and interventions in isolation, for example, can be difficult to recreate on a wider scale.
But this doesn’t mean that evaluation is impossible, nor that it is merely a nice-to-have. Evaluation of what you are doing today provides vital information for improved decision-making and more effective services in the future.
Putting the right kind of evaluation in place
We have a set of ‘rules of thumb’ that we believe underpin evaluation at the local level. By commissioning or planning evaluations that adhere to these principles, local areas can be more confident that they have put in place a good process, and that their findings are more likely to be reliable, relevant – and in turn, useful.
The same set of principles also underpins the EIF evidence standards, which we use to assess the strength of evidence for individual programmes. While complex evaluations at the local level may not yet qualify for a formal evidence rating – of the kind we provide via the EIF Guidebook – we believe that sticking to these guidelines means that local changes may at least be seen as being on the path towards becoming ‘evidence-based’.
By focusing on examples of evaluation that adhere to these rules of thumb, we seek to provide good examples of local authorities and other organisations that are monitoring the effects of their decisions and actions, and contributing to the wider evidence base on effective early intervention at the same time.
Our ‘Early intervention into action’ case studies
In this section, we tell the stories of local places that have made changes to bolster or expand early intervention in their areas, and that have put in place plans to evaluate the effects of those changes. Local changes may be aimed at improving outcomes for children, young people and families, or at improving the effectiveness or efficiency of local services.
The stories are based on short interviews with individuals connected with the local changes, and on documents relating to their evaluation (evaluation plans and/or reports). The content of the case studies has not been independently verified by EIF.
Effective early intervention must be responsive and appropriate to local circumstances, so our case studies are not templates or models to be replicated. Rather, we believe they are interesting, useful, illustrative stories that shine a light on the difficult but essential business of evaluating early intervention at the local level.