There’s not a day goes by on Twitter without educators arguing over their preferred pedagogies, and which one has ‘proven’ to be the most ‘effective’. I use inverted commas because proof – absolute proof – is a slippery concept. And interpretations of effectiveness are equally complex: most measures focus upon academic performance – crudely, does the given innovation improve test scores? Let’s put aside the question of whether, in a world where employers and colleges increasingly disregard terminal exam performance and students’ ability to regurgitate content, test scores should remain the most commonly cited measure of effectiveness. The other key question lies in what’s known as the ‘experimental science’ model.
Most new educational theories are developed using the experimental science model, in which an external academic – or a team of academics – sets up pilots with small groups of schools. These pilot programmes are often sold as ‘evidence-based’ randomised controlled trials, where variations are minimised (RCTs hate variation) in order to replicate laboratory conditions. Now that RCTs are all the rage in education, schools are increasingly not fashioning their own innovations.
Instead, they’re told what to do by regulatory authorities on the basis of experimental science research initiatives. The exhortation goes like this: “we’ve worked out ‘what works’ – now you need to do it”. The problem is that ‘what works’ doesn’t – or rather it works, but only if your school happens to share most of the contexts and conditions of the original pilot. And that’s the point: schools exist within messy, complex and varied contexts. ‘What works’ silver bullets frequently turn out to be duds at scale-up. And it is often at the scale-up stage, when less-than-impressive results appear, that two opposing educators can cite completely contradictory evidence – or two opposing politicians can each cite the evidence that suits their world view: hence the phenomenon of ‘policy-based evidence-making’ (rather than what they claim to be ‘evidence-based policy-making’).
So, if the experimental science method is flawed, is there any alternative? Thankfully, yes. And the good news is that it puts schools back in the driving seat when it comes to ‘getting better at getting better’.
We’ve recently begun training school leaders and teachers in a new, exciting approach to building a culture of adult learning and continuous improvement. Our excitement stems from the possibility – and it’s so new that it’s only that – of this approach unlocking the holy grail of educational reform: self-improving schools (and teachers). It’s called Improvement Science.
What is Improvement Science?
Improvement Science has been around for a long time, pioneered by the management genius W. Edwards Deming. It has been used successfully in the aviation, automotive and healthcare industries, and the Improvement movement has now finally reached education. It offers a different slant on innovation – an alternative to disruptive change – by supporting educators in ‘getting better at getting better’. It’s not a one-off event or pilot initiative, but a continuous cycle of incremental gains. Despite the simplicity of the approach, improvement methodology has achieved spectacular results:
- In sport, Sky cycling’s boss Dave Brailsford championed the “aggregation of marginal gains”, believing that if his team made only a 1% improvement in any given aspect of planning, but made enough such gains across the board, they could win the Tour de France within five years. He was wrong – they won it in three;
- In some US hospitals mortality rates from severe sepsis have halved in less than three years, through a focus upon improvement methodologies;
- Through adopting W. Edwards Deming’s improvement cycles, Japanese companies like Toyota and Sony rapidly came to dominate their markets.
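The arithmetic behind “marginal gains” is worth spelling out: small improvements compound multiplicatively, so many 1% gains add up to far more than their simple sum. Here’s a minimal sketch of that compounding (the function name and figures are illustrative, not from any of the programmes above):

```python
# Illustrative sketch: compounding many small improvements.
# A 1% gain in each of n independent areas multiplies together,
# yielding far more than n percent overall.

def compound_gains(n_areas: int, gain_per_area: float = 0.01) -> float:
    """Overall multiplier after a gain_per_area improvement in each of n_areas."""
    return (1 + gain_per_area) ** n_areas

# e.g. 1% better in each of 50 aspects of preparation:
overall = compound_gains(50)
print(round(overall, 2))  # prints 1.64, i.e. roughly 64% better overall
```

Fifty separate 1% gains don’t sum to 50% – they compound to about 64%, which is why the approach rewards breadth of attention rather than one dramatic fix.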
Fine, but what’s this got to do with learning and engagement?
The recent history of educational reform has seen a slew of data-driven targets, imposed ‘solutions’, increased accountability (also known as ‘fix it, or else’) and the growth of mandates and inspections. Despite decades of increased effort and expenditure, the returns have been disappointing, and good practices obstinately refuse to scale. Deming argued that imposing targets only breeds fear and corruption, and that sustainable change happens only when practitioners themselves (in this case teachers) become the change agents, when end users (in this case students) are central to the processes being tested, and when improvement is treated as a continual process.
At its heart, the Improvement Science model only seeks to answer three questions:
1. What are we trying to accomplish?
2. How will we know that a change is an improvement?
3. What changes can we make that will result in improvement?
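One concrete way to see how the three questions drive practice is to record each improvement cycle against them. The sketch below is hypothetical – the class name, fields and example figures are my own illustration, not a tool from the Improvement Science literature – but it shows how an aim, a measure and a change idea pair with repeated tests of change:

```python
# Hypothetical sketch: logging one improvement cycle against the
# three questions. All names and numbers here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ImprovementCycle:
    aim: str          # 1. What are we trying to accomplish?
    measure: str      # 2. How will we know that a change is an improvement?
    change_idea: str  # 3. What change might result in improvement?
    results: list = field(default_factory=list)

    def record(self, baseline: float, after: float) -> bool:
        """Log one test of change; True if the measure improved."""
        improved = after > baseline
        self.results.append((baseline, after, improved))
        return improved

cycle = ImprovementCycle(
    aim="Improve Year 9 reading comprehension",
    measure="Mean score on a common fortnightly quiz",
    change_idea="Ten minutes of paired reading per lesson",
)
cycle.record(baseline=62.0, after=65.5)  # returns True: the change was an improvement
```

The point of the structure isn’t the code itself but the discipline it encodes: no change idea is tested without first naming the aim and the measure, so every cycle produces evidence the school itself owns.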
Behind these deceptively simple questions, however, lies a complex set of variations (for example, why does a change work in one context but not another?) and, as we’ve seen, variation is the norm for most schools. The questions also imply a profound shift – from privatised practice to a culture of open learning. If Improvement Science can achieve dramatic impact in the complex and specialised world of healthcare, why can’t it work in the equally complex world of learning?
For schools, taking an Improvement Science approach and joining – or creating their own – Networked Improvement Communities means reclaiming some of the ground that’s been conceded to others who have determined the agenda they should follow, the song they should sing. It will almost certainly lead to performance gains, but, because of the Hawthorne Effect (whereby almost any innovation creates short-term benefits), that isn’t the point. Schools that commit to cycles of continuous improvement are more likely to see sustained, long-term performance gains. More than that, however, is the prospect of schools becoming collaborative centres of open learning, sharing what they’re learning in a culture where access to internal knowledge is free and practitioners are trusted to find their own solutions. It’s this culture shift, which we’re seeing in the early schools we’re working with, that is so exciting.
So, how will this work?
We’re looking to partner with a small number of schools, clusters of schools, and administrations in creating open learning cultures through learning how to improve. We continue to believe that engaging learners is key to their long-term chances in life. But engagement alone isn’t enough: if we want deep and engaging learning for our students, and if we want successful practices to scale, then we also need in place what Tony Bryk of the Carnegie Foundation highlights as the other two ‘e’s: effectiveness and efficiency. Improvement Science offers a route to self-determination in all three. I’d urge educators to consider adopting Improvement Science methods in their schools – if nothing else, it’ll put an end to those silly Twitter spats.