First published on Oct 28th 2011
In 1972, Archie Cochrane launched a new phase of healthcare when he observed an almost random level of variation in successful outcomes of supposedly similar treatments at different hospitals. The seeds were sown for evidence-based medicine, and the application of scholarship to discover what constituted best known practice for medical treatments and interventions. His name is immortalised through the work of the Cochrane Collaboration, which relies on the collective efforts of many scholars critically reviewing and synthesising published research to establish gold standards for all to see – doctors and public alike.
So, nearly 40 years down this road, some of the best minds have been pressured into agreeing a new indicator which seeks to predict what levels of mortality are to be expected at each hospital in the country. If you have been following the debate, you will be aware of the intense deliberation, argument, counter-argument and near rebellion this has provoked on occasions.
No-one is claiming that this is an easy task. We know, for instance, that there are direct links between disease and socioeconomic status, geography, gender, ethnicity and lifestyle factors such as smoking. We also know that different treatments and interventions carry different risks. We aren’t surprised, therefore, to realise that the predicted mortality rate for a hospital will be affected by the complexity of care it provides and the characteristics of the population it serves.
So you would expect it to be difficult to predict accurately how many people are likely to die within 30 days of being treated in hospital. That is what standardisation is all about, and why the indicator is called the Summary Hospital-level Mortality Indicator (SHMI). No-one has ever pretended that delivering healthcare is easy. Doctors have to do difficult things most days. Nurses do different things, equally difficult. Managers too have their share of difficult things, but, despite being politicians’ favourite targets for abuse, they contribute to success or otherwise through their planning and management of resources to keep the healthcare system running.
So it is right to expect people who do difficult things to be accountable to the public whom they serve. You would expect someone to be accountable for predicting the likely number of deaths a hospital can expect. You would expect the publication of such information to be contentious because it takes away some of that air of mystery that professionals can generally get away with creating.
Two surgeons with the same mortality rate can in reality be at opposite ends of the performance spectrum – the very best and the very worst sharing the same raw number. The one who takes on all the cases that are too risky for anyone else to consider, and in so doing saves many lives, offers a completely different level of care from that of the incompetent surgeon who fails on even the easy cases, and ends up taking lives that didn’t need to be lost.
And raw numbers can indeed tell such a tale. Unfortunately, within a society where most people run scared of numbers, our first reaction is more likely to be to worry about how people will misuse such raw figures, instead of concentrating on helping them overcome their fear so that they know what to do with the numbers. I always say that data rarely provides answers; its power is in helping you to understand what questions to ask. And asking the right questions in the above example very quickly leads to a clear understanding of what is happening. Only the incompetent surgeon in our example has something to fear amongst an educated population, and rightly so!
So, when we look for the evidence on mortality, we expect clever people to work their magic in such a way that they can help Joe and Miranda Public to see whether their local care services are doing a good job or not. The experts need to put twice as much energy into educating people in how to interpret those numbers as they do into producing the right values in the first place. The sort of good job Archie Cochrane was worried about back in 1972.
So when the Information Centre published its new evidence exposing the considerable variation in performance between hospitals, we expected transparency on the numbers, supported by meaningful education to allow people to make sense of it for themselves. Instead, energy seems to have been spent on building the smokescreen behind which the hospitals and clinicians can continue hiding from the ongoing and horrific reality of unwarranted variation.
Of course we want all hospitals to offer the same high quality outcomes, but we aren’t there yet, and until we reach such a utopia, we should each have the chance to make the personal choice of how much inconvenience we will accept so that we can get to the very best, if we so wish. Alternatively, we might choose the extra risk so that we have minimum disruption. And that trade-off is not as obvious as you might think. People diagnosed with cancer in the Isle of Wight frequently choose a regime with an inferior prognosis so that they can stay at home on the island, rather than choosing a much more intensive treatment regime away from family and friends in Southampton. That is what choice is about, and why it is so important.
But sadly, those clever people in the Information Centre have determined that the data on mortality is far too complex to translate into an accessible form for the public to digest. Instead it is presented, buried in complex, highly caveated reports aimed at fellow statisticians.
Now I know a thing or two about statistics. I know that most people do indeed have more legs than the average person! So I’ve looked at the data with interest.
I live in Ipswich, and I know that no relative of mine has had a good experience from my local hospital, and that its leaders only ever face the light of day defensively. I was pleasantly surprised to discover that its performance is pretty close to the centre of the band you would expect it to be in. I was even more surprised to discover that the hospital local to Cass is predicted to have a similar number of deaths each year, despite providing much more specialist care as well as facing the more complex health demand of its East End population. But instead of the same number of deaths as predicted, it positively glows at number two in the country for lowest mortality. It only had around 68% of its predicted number of deaths. Well done Barts and the London! But if I got off the train in Colchester by mistake, then instead of a similar number again, this time I would have found data pointing to worse mortality: 5 deaths for every 3 at Barts and the London.
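The arithmetic behind those comparisons is simple enough to sketch. A standardised mortality ratio is just observed deaths divided by expected (risk-adjusted) deaths; the figures below are hypothetical round numbers chosen to illustrate the 68% and 5-for-every-3 comparisons above, not the actual published values for any hospital.

```python
def smr(observed_deaths, expected_deaths):
    """Standardised mortality ratio: observed deaths / expected deaths.

    'Expected' is the risk-adjusted prediction for that hospital's
    case mix and population, as in the published indicator.
    """
    return observed_deaths / expected_deaths

# A hospital with 680 deaths against 1,000 expected scores 0.68,
# i.e. around 68% of its predicted number of deaths.
low = smr(680, 1000)

# A hospital whose ratio is 5/3 of that would record roughly
# 1,133 deaths against the same 1,000 expected.
high = low * 5 / 3

print(round(low, 2))   # 0.68
print(round(high, 2))  # 1.13
```

The point the ratio makes is exactly the one in the text: two hospitals with the same *expected* deaths can sit at very different points on the scale once you divide through.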
We may be delivering more sophisticated treatments, and calculating some pretty clever stuff to produce these figures compared with Archie’s day. Like him, we know that we must avoid ascribing too much precision to our findings, because the uncertainty around each of the standardised mortality figures is never better than 5-10% or so. But even though these subtle niceties may be lost in translation for many people, we have to trust the public with such a serious trade-off as 5 versus 3. After all, this isn’t 5 tins of beans from Morrismart for the price of 3 from Markrose. This is about lives. Shorter than necessary lives. Well and truly short-changed!
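That 5-10% uncertainty is easy to see with a back-of-the-envelope calculation. The sketch below uses a normal approximation with a Poisson assumption on observed deaths, which is a common simplification, not the published SHMI methodology, and the same hypothetical figures as before.

```python
import math

def smr_interval(observed, expected, z=1.96):
    """Approximate 95% confidence interval for a standardised
    mortality ratio, assuming observed deaths are Poisson-distributed.
    A simplification for illustration; the real methodology is
    more involved."""
    ratio = observed / expected
    se = math.sqrt(observed) / expected  # approximate standard error
    return ratio - z * se, ratio + z * se

lo, hi = smr_interval(680, 1000)
# roughly (0.63, 0.73): even with hundreds of deaths, the ratio
# carries uncertainty of around 5% either way
```

So a published ratio of 0.68 is better read as "somewhere in the low-to-high 0.6s or low 0.7s", which is exactly why ascribing too much precision to any single figure is a mistake, and also why a gap as large as 5 versus 3 still stands well clear of the noise.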