Quality indicators tend to proliferate over time in health systems, and the NHS in England is no exception. Tracking and measuring the quality of care serves multiple functions, including holding the NHS to account and managing performance (both prominent in the case of the NHS as a centrally driven system funded by general taxation). Data on hundreds of indicators are collected by hospitals and other providers and make their way into detailed spreadsheets held by a wide range of bodies, including NHS England, Public Health England, and NHS Digital. There have been several iterations of national ‘frameworks’ that group quality indicators to assess performance, including the current outcomes frameworks. Some of these overlap with quality assessments by the regulator, the Care Quality Commission, which also provides information to patients and the public through its rating systems.
Running in parallel, and somewhat separately, is an array of quality indicators and measurement activities designed to help teams improve their local services, including national clinical audits. Measurement is an essential component of “Quality Improvement” (QI), and it often allows a less rigid approach, designed around the specific needs of an improvement project, in contrast to the robustness required of national indicators, which must, for example, reliably judge performance across areas.
Is there untapped potential for national measurement?
In our recent exploratory study, we examined the gap between measurement at the national level and the indicators captured by local teams. We looked at these two worlds of measurement in relation to three different specialities (breast cancer care, children’s and young people’s mental health care, and renal care). Unsurprisingly, the number of national indicators for each was large: for breast cancer alone there were 68 indicators (excluding screening), spread across 23 sources, some of which were difficult to locate online and often comprised multiple spreadsheets and large Excel workbooks to download and navigate.
Interviews with clinicians and managers in trusts told us two things. First, there is some familiarity with national indicators (particularly waiting times for cancer and mental health), but limited use is made of most of them because the data are either delayed or not broken down into small enough units to be helpful to teams. There were exceptions: the renal team were enthusiastic users of the Multi-site Dialysis Access Audit, and although the breast cancer teams were not familiar with the source of national cancer data (the National Cancer Intelligence Network’s cancerstats), useful data were being fed back via their local Cancer Alliance.
Second, there was appetite among local clinical teams for better access to national quality data. Many were collecting their own data through small clinical audits and had ideas about which indicators of quality were missing and might be useful (including more nationally benchmarked feedback focussed on patient experience). Some clinical teams were also joining regional and intermediate bodies to improve their access to comparative data and to fill gaps in their analytical capacity and capability. As part of our exploration we also brought together all the identified quality improvement indicators into one place, something front line clinicians told us they would find valuable. Alongside the national indicators, and based on our discussions, we also listed the local quality indicators and the missing indicators, which helped to highlight what is meaningful to clinical teams beyond the national level.
Navigating the maze
Does it matter that there is a gap between the two worlds of measurement? There have been instances of policy makers attempting to put national indicator sets into a form that local clinicians might use: in 2009, Lord Ara Darzi launched the now archived Indicators for Quality Improvement website. But even if data were more easily available, many clinical teams lack the analytical capacity (and time) to do much work with them (and here regional clinical networks, national audits, and Royal Colleges fill an important analytical role).
In 2016, the OECD reviewed efforts to improve health care quality in the NHS and noted that the range, format, and reporting level of quality indicators were now ‘extremely complex’. With new condition-specific initiatives underway in the wake of the Long Term Plan, and major changes being trialled for some of the flagship waiting time indicators, the picture is likely to become even more complex. An overhaul of the panoply of national quality indicators may be overdue: if nothing else, the disconnect between what local clinical teams use and what they need may be a signal that the national system should be simplified and rebalanced.
Ruth Tholby, Assistant Director of Policy, Health Foundation, and Catherine Turton, Senior Analyst, Health Foundation.
Competing interests: None declared