Quality accounts were introduced by the last administration to provide a reliable, jargon-free measure of achievement that everyone could understand. Have services improved and by how much? How is this reflected in the experience of patients? Where is the evidence?
As recently as last April, the King’s Fund wrote: “Quality accounts provide a real opportunity to increase public accountability on quality.”
Yesterday, the same organisation produced a damning annual report for year one, which concluded that quality accounts “failed to provide the public with meaningful information about the performance of local health services”.
The coalition government liked the idea of quality accounts enough to give them a prominent role in its vision of a patient-centric, outcome-oriented and accountable NHS.
What a pity, then, that the first crop of quality accounts is so poor.
According to the King’s Fund study, which reviewed about a quarter of the 2009/10 submissions, there is “significant variation in the quality and presentation of the information published”.
Only one in five accounts provides benchmarking data, and half fail to compare performance with the previous year. Some of the presentation is imaginative: “One NHS trust expanded the vertical axis on a bar chart to suggest a three-fold improvement in cleanliness when the improvement on the previous year was in fact 0.6 per cent,” the report says.
Financial accounts are hard to fudge. You are dealing with things of known value, against which you apply standard procedures according to clear accounting rules. And just in case all of that fails to ensure probity, you bring in external auditors to verify the results. If you cheat and get caught, you go to jail.
Quality accounts are different. Unlike cash or fixed assets, quality is subjective and hard to measure.
That’s no excuse for not trying. The rules for quality accounts were written to allow trusts to define their own measures. According to the King’s Fund, while some of these should reflect local priorities, we also need comparative data based on national standards and a more objective system of quality assurance.
Nobody wants to put quality in the hands of the bean counters, but as a minimum we need to know who’s accountable. Apparently we can’t even do that.
A footnote to the survey explains that “no comprehensive list exists of all the providers who met the criteria for being required to produce a quality account in 2010”.