
Meet the evidence detectives: An introduction to programme assessment

Published: 14 May 2018

Lucy Brims provides first-hand insight into programme assessment, a key part of our work at the Early Intervention Foundation (EIF). She sheds light on the process for programme providers, commissioners and anyone else who is interested in what we do and why.

At EIF, we aim to increase the availability of effective early intervention to improve the lives of children. As a researcher at EIF, one way I contribute to this is through ‘programme assessment’: assessing the evidence underpinning programmes for children and their families. Such programmes can reduce youth violence and improve child mental wellbeing, amongst other benefits. A team of researchers works year-round to produce rigorous and useful summaries of the evidence base for programmes, accompanied by an easy-to-understand rating of the strength of that evidence. These are published on the EIF Guidebook, an independent online resource. We do this to support commissioners in making informed decisions about which programmes to deliver for their local population.

Assessing evidence for programmes is a stimulating but challenging process. Each programme may have many evaluations, each with methodological strengths and weaknesses. The evaluations may be conducted in several countries and show mixed results. So, how do we assess the overall evidence in a rigorous way?

The assessment process starts with scoring the most robust studies against EIF’s 33 criteria (more information about our standards of evidence is available on the Guidebook). This is the long bit. We delve into the detail of differential attrition and the minutiae of measurement validity. In total, the criteria cover four aspects of evaluation quality: design, sample, measurement and analysis. The highest ratings are awarded only when all of these key aspects are satisfactorily met. If bias is introduced through any of the four, we are less sure that the programme itself caused any observed improvements in outcomes. The EIF rating also takes account of the studies’ findings: whether outcomes for children who received the programme were positive, negative or null.
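
Purely as an illustration of how these judgments fit together, here is a minimal sketch in Python. The class, the three-point scale and the aggregation rule are hypothetical simplifications of my own, not EIF’s actual scoring tool, and in practice each of the 33 criteria receives its own judgment rather than one per aspect.

```python
from dataclasses import dataclass
from enum import Enum


class Judgment(Enum):
    # Hypothetical three-point scale; EIF's real criteria are more granular.
    MET = "met"
    PARTIALLY_MET = "partially met"
    NOT_MET = "not met"


@dataclass
class StudyAssessment:
    """Illustrative record of one study's quality across the four aspects."""
    design: Judgment
    sample: Judgment
    measurement: Judgment
    analysis: Judgment
    positive_findings: bool  # were outcomes for children positive?

    def supports_highest_rating(self) -> bool:
        # The highest ratings require all four aspects to be satisfactorily
        # met; bias in any one of them weakens the causal claim. Findings
        # are factored in alongside the quality judgments.
        aspects = (self.design, self.sample, self.measurement, self.analysis)
        return all(a is Judgment.MET for a in aspects) and self.positive_findings


# Example: a strong design let down by a measurement problem.
study = StudyAssessment(
    design=Judgment.MET,
    sample=Judgment.MET,
    measurement=Judgment.PARTIALLY_MET,
    analysis=Judgment.MET,
    positive_findings=True,
)
print(study.supports_highest_rating())  # False: one aspect not fully met
```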

As we rate studies against the criteria, we record supporting information and the reasons for our judgments. Sometimes the information is explicit in the paper. At other times, detective work is required: examining tables of statistics, for example, to work out how many people remained in the sample at a given timepoint. We sometimes contact study authors if we can’t figure out what happened. EIF has created extensive guidance and examples to support the assessment process, but researcher judgment is still required. Even when UK studies are less robust and therefore do not drive the rating, we aim to assess and summarise them on our Guidebook.

How do we ensure we have got the assessment right? Firstly, a senior researcher at EIF reviews the provisional rating to check consistency across programmes. Senior researchers meet with programme assessors to discuss any difficult judgments, such as whether the number and type of people dropping out of a programme cause serious methodological problems. We also arrange meetings with external experts, usually university academics. Programme assessors begin by presenting an overview of the programme and the provisional rating, and then receive advice on tricky issues, such as uncommon analysis techniques. These meetings can last hours as we scrutinise every detail of the studies and resolve disagreements through discussion. We also decide upon overall ratings when the evidence is mixed, as EIF director of evidence Tom McBride has discussed.

Next, the evidence summary and rating are shared with the provider. The provider can request a reassessment if they consider that EIF has misapplied its criteria. In such cases, the relevant ratings and contested criteria decisions are reviewed at a meeting of our ‘moderation panel’, a wider group of experts and at least one person from EIF who was not involved in the original assessment. Finally, a ratified rating and key details of the programme are uploaded to the online Guidebook.

All in all, the process has its challenges – after all, we are seeking to translate complex evidence into a single rating. However, it is built around a careful set of checks, balances and expert review. As a researcher, I find programme assessment a satisfying and interesting (if rather lengthy) activity. It also provides a meaningful opportunity to contribute to a rigorous and user-friendly resource in the Guidebook, which is ultimately designed to improve outcomes for children through effective early interventions.