
Choosing the right measure to evidence the impact of early intervention

Published

23 Jun 2015

How do we know if what we are doing is making a difference? This question is put with great frequency to those of us involved in delivering early intervention or early help services.

I had this naive hope that one day I would come across some elixir-like supertool which would do everything I needed it to do: support engagement, assess risk, enable collaborative goal-setting, and measure outcomes.

Oh, and it would need to be reliable and valid too.

Through my time spent with EIF, I now know why these two concepts matter. ‘Valid’ data is produced by a tool which actually measures what it is supposed to measure. For example, if you simply ask people directly whether a service has helped them or made their lives better, there are multiple reasons why they might answer yes when the honest answer is no – including hypothesis guessing, evaluation apprehension, wanting to please the researcher or practitioner who asks the question, interaction of different treatments, and more.

I also need ‘reliable’ data, which means the answer doesn’t depend on who is asking the question, and the same answer would be given if the question were repeated: two practitioners using the same tool with the same family should arrive at broadly the same picture.

The best way to get valid and reliable results is to use what’s called a standardised tool: one that has been tested for reliability and validity. It’s worth pointing out that there can also be disadvantages to using standardised measures. They can be unwieldy, designed to measure very specific and narrow factors, and complex to apply. So finding the right approach is not straightforward.

I don’t think I am the only one looking for a simple life. What motivated my search were issues that many others are struggling with – how to free practitioners from some of the burden of record keeping, and simplify processes to make services a bit clearer for families. A tool that could be used across a 0–19 early help service working to a whole-family approach would have been a real find.

Inevitably I was ensnared in the trap of trying to make the various tools I encountered live up to my vision. Each time I was eventually forced to admit that my tool of choice had come up short, unable to deliver across the wide range of functions I had in mind. When I speak to colleagues working in this area they often describe similar experiences in their efforts to make tools perform outside their comfort zone.

Some of the places EIF work with seem to have had similar journeys to my own – and one of the most frequently requested areas of support has been choosing appropriate assessment tools, or tools to measure ‘distance travelled’ for children and families. The advice I would give is that it is really important to understand the strengths and limitations of any tools you use, and to be clear about what it is you are hoping to achieve. In this way you can avoid setting out with unrealistic expectations, and so avoid too much disappointment.

Let me illustrate this point through an example. Across the early help service in my day job at Kensington & Chelsea we decided to use the Family Star Plus as an assessment tool. Initially I hoped that this could be a real all-rounder – providing a visual, accessible, and collaborative framework to work with families across the 0–19 age range. First impressions were great. Practitioners loved it. Families liked it. It looked good. It felt proportionate – the time it took practitioners to complete and record the star balanced well with the benefits for both the family and the service. It also had a function to record and report the quantitative values of the star, so rather than relying purely on qualitative descriptions of outcomes, it held out the promise of some numbers for analysis.

I was really looking forward to receiving the first quarterly report of star results as evidence that the early help service was making a positive impact.

Then I started to have some doubts. Whilst I very much liked the way the star supported practitioners to engage families – its ethos and collaborative approach – I did have some concerns about how I might use the quantitative scores. When it came to choosing a tool, I had been ignoring some of the cautionary tales I had heard from those within evidence-aware circles. Because tools such as the Outcomes Star rely on relatively subjective practitioner and family perceptions, they do not constitute a standardised measure, and are therefore less robust for evidencing impact.

Whilst at an individual case level the distance-travelled perceptions of the family and the practitioner have value on a relational basis, these scores do not lend themselves to more objective comparisons of service effectiveness. In other words, if you are looking for a measure that provides a reliable and valid score evidencing the effectiveness of your work with a family or a group of families, it probably makes sense to consider a standardised tool with a strong evidence base.

I then turned my hand to developing a programme working with families on the edge of care, where engagement and risk assessment were supported by other processes, so I decided to look for a different tool. My aim was to find a measure which helped demonstrate the value of the interventions offered in improving family resilience. The hypothesis was that if it was possible to strengthen and improve family relationships, this might improve the family’s ability to manage difficult events and transitions in the future. The tool we settled on for this project was Score 15 – borrowed from family therapy. It is both an outcome measure and a tool that can support a therapeutic intervention.

The advantage of using Score 15 is the accompanying evidence that backs its scoring system – providing confidence that any measures of progress and impact are benchmarked and validated objectively. As this is a pilot programme to test the effectiveness of an intervention model, confidence in the measure is particularly important: when trying to understand whether the practice being trialled is more or less effective than business as usual, we needed a standardised measure. If the project proves successful it will be important to have a strong and reliable measure to support our claims. That’s not to say it is a perfect fit – as it was developed in a clinical setting, we’ve had to tweak the language to suit outreach work (the authors were really accommodating) – and it does presuppose a level of trust in relationships that we can’t always rely on early in the process. Each family member is asked to score their family against 15 questions, and unless the family are supported and contained in a safe way, discrepant scores amongst family members have the potential to damage relationships.

At EIF we are working with a number of areas to support them in exploring which tools they should use to measure the effects of their early help and early intervention services, depending on the context, priorities and purpose of the work. There are really useful experiences to learn from here, and it is reassuring to see that others have also faced the challenge of trying to select the right tool.

If these issues resonate with your own experience, or if you have some valuable insights of your own, we’d like to invite you to share these with us and others across early help services. At EIF we want to draw on our work with local places to develop a guide to help operational services and commissioners select measurement tools that fit with their objectives. Please get in touch at info@eif.org.uk if you would like to contribute your perspective.