Last week saw the start of the England Men’s and Women’s Cricket Teams’ T20 series against West Indies and India respectively. T20 is now an integral part of the international cricket calendar, but it is a relatively new phenomenon, with the first T20 World Cup played in 2007. Although there are still purists who think it’s not real cricket, it is difficult to argue that it hasn’t brought in a new audience and had an impact on all forms of the game over the last decade.
Nowhere has this been more obvious than with the West Indies men’s team, who failed to qualify from a group of three (two qualified) in 2007 and, nine years later, were World Champions. What changed? Their use of everyone else’s data!
Statistical analysis of ‘what works’ encouraged them to bat second in every match of the 2016 World Cup, because they knew that teams chasing a total were more successful than those defending one. They also embraced boundary hitting: even though they knew they would face more balls they didn’t score off, they still accumulated bigger totals. They didn’t worry about losing wickets either, because they knew more big hitters were waiting further down the order. They even promoted bowlers to open the batting, because the data told them that, although on average they were out within two overs, they would score at 1.7 runs a ball and soften up the bowling for the batsmen to come with every hit. They also crunched the data on selection, bowling and fielding, and the resulting innovations led to them being recognised as probably the greatest T20 side ever.
Substance have spent the last decade encouraging organisations to focus on interrogating their own data so they can make informed decisions about which aspects of their work can be improved, so that outcomes are better for participants and communities. We know that this information has enabled staff to make tweaks to their existing practice and, on occasion, to have a Eureka moment. However, we also now know that there are hundreds of sport and non-sport based organisations delivering ‘similar’ projects, and that someone, somewhere will be doing it marginally better, and occasionally much better, than everyone else. We believe that when a team of people understand and are committed to delivering what constitutes Great Practice, it has the potential to be transformative.
Last year, in a blog post, I introduced the idea of Competitive or Dynamic Benchmarking in the Sport for Development sector. I believed that benchmarking new and existing services would give commissioners confidence, improve their understanding of what great practice looks like, improve data collection, encourage innovation and ultimately improve outcomes for service users. Now that a few organisations have gone through the Scores benchmarking process, I thought it was worth reflecting on how it will help them in the short and medium term.
Firstly, it is worth saying not only how effectively teams of people focused on one particular thematic area can work together, but also how much different teams rely on each other to achieve what are essentially shared targets. Not unlike the batsmen, bowlers and fielders in a successful cricket team.
Although it is early days in our benchmarking work, the following points are worth making:
- To some extent, benchmarking of performance already existed within each organisation. While this was often expressed as a funder’s requirement, the idea of comparing what had been achieved against a target was standard practice. However, these benchmarks were often seen as arbitrary, and when they were met or surpassed there was no real acknowledgement of the achievement or of what it meant to the organisation or service users.
- The process has enabled frontline staff to consider more carefully which types of data are most relevant when measuring their practice: this often includes information on the characteristics of the people they work with, the extent to which they engage in a project, what they and others think about it, and what difference it has made. While information requested by funders may be included, what often emerges now is a more sophisticated understanding of what really makes the difference.
- Staff are more confident about the quality of the measures they are left with at the end of the benchmarking process, particularly when validated questionnaires that are recognised by commissioners are used to capture behaviour change amongst the most vulnerable participants.
- The benchmarking process provides time for reflection and enquiry. It is no great surprise when the majority of projects are revealed to be Okay or Good, rather than Great. Nor is it a surprise to see how quickly the teams responsible for those projects then start out on the journey of implementing new ideas that will improve the quality of their work.
- Organisations are motivated by the thought of all of their constituent parts being Great. Not unlike a world-class cricket team, this means improving the performance of everyone, even if only marginally.
For more information on our benchmarking work, please join us at the Scores Webinar on Thursday April 4th, 11am – 12pm, email info@substance.net or call us on 0161 2445418.