Good coach, great coach: tackling the challenges of evaluating practice
EIF evidence director Tom McBride reflects on the importance and challenges of taking an evidence-based approach to questions about professional practice – certainly in providing early intervention support for children, and maybe also on the rugby field.
For me, October means that my Sundays are spent coaching rugby to the under-7s at my local club. As I am sure anyone who has ever tried to teach a group of 6-year-olds to do anything will recognise, how good a coach you are is often determined not by what you teach but by how you teach it. How engaging you are, your ability to communicate clearly, and the rapport you build with the players are all crucial to making sure they have fun and learn the basics of the game.
Of course, this view is not limited to trying to teach 6-year-olds to play rugby in the rain. I am sure we all recognise that there are many areas of life where the skills of a practitioner, be they a volunteer or professional, and the quality of the relationships they build are absolutely essential elements of their work with individuals. At EIF, we are clear that for all practitioners – from teachers to health visitors to social workers and beyond – high-quality practice is a prerequisite for achieving good outcomes when working with children and families. I find Tim Moore’s work on authentic relationships and evidence-informed decision-making a very useful way of understanding why practice matters and how we know that it does.
However, professional practice poses a challenge for What Works centres like EIF. It is hard to apply many of the generally accepted approaches to impact evaluation to questions about practice. For example, it is easier to run a randomised controlled trial of a mentoring programme to assess whether it improves outcomes for children than it is to evaluate the quality of the individual relationships that mentors build with mentees, or to distil the essential elements of those relationships. There are at least three broad reasons why evaluating the impact of specific practices, such as relationship-building or communication, can be difficult:
- It is hard to codify high-quality practice. Regardless of their knowledge of rugby (or any other sport), most people can tell the difference between a good coach and a terrible one. But can we tell the difference between a good coach and a great one? Similarly, there are challenges in pinning down which particular elements or ingredients of professional practice matter most in defining quality.
- It is hard to transfer high-quality practice. Assuming we are able to isolate the most effective practices, what is the best way of spreading this knowledge? Training courses, teaching manuals, online tutorials, peer mentoring? While we all hope we have the capacity to improve in our work, can being a great coach or a great practitioner be taught, or is it at least in part about inherent skills and abilities?
- It is hard to measure changes in practice. Measuring improvements in practice often requires observation, which is time-consuming and costly. And even when we do observe practice, there are challenges in understanding what high quality looks like. How do we measure the quality of what a family support worker does with a family? And even if we can be confident that their practice has improved, how do we go about measuring whether or not that change has had a positive impact on the outcomes for a child?
Last month I wrote a blog on our ‘common elements’ approach to codifying effective practice – in this case, in relation to how teachers and school leaders can support social and emotional skills in primary school children. I described how we took a systematic approach to identifying the routines and approaches used in programmes which have been shown to improve children’s social and emotional skills. This is one attempt at bringing a ‘what works’ mindset to questions about practice – but there is lots of other interesting and valuable work focusing on the relationship between evidence and practice. For example, the recent Fostering Effective Early Learning study in Australia demonstrated the impact that professional development can have on the quality of the curriculum, practitioners’ interactions with children and, crucially, child outcomes. Closer to home, the Department for Education’s recently announced early years Professional Development Programme will also be evaluated via an RCT, to test the impact that this programme has on the quality of settings and outcomes for children in England.
Other What Works centres are also doing lots of exciting and innovative work to think about how practice can be evaluated. For instance, the Education Endowment Foundation recently announced that it will be conducting an evaluation of the most effective ways to start a science lesson, while What Works for Children’s Social Care has recently announced evaluations of family group conferencing and family drug and alcohol courts as part of its ‘Supporting Families: Investing in Practice’ programme. Many of you will already know the work of Research in Practice on evidence-informed practice, which seeks to integrate research evidence with user experience and practitioner wisdom.
We recognise that an important element of supporting committed professionals to work with children and families in the best possible ways is helping to evaluate different forms of practice, so we can understand and share what is most effective. Evaluating practice presents us with lots of challenges, but as an evaluation community we need to find ways of rigorously assessing the most effective methods and approaches – and so I am pleased to see an increasing focus on this across the What Works community and beyond. At EIF, we will continue to develop approaches to assessing the impact of practice and to build partnerships with like-minded organisations. On a personal level, I await the first evaluation of common approaches to coaching rugby to 6-year-olds with great anticipation.