Since 2005, the Health Consumer Powerhouse has produced its annual EuroHealth Consumer Index, ranking European health systems according to their performance on a host of indicators covering (i) patient rights and information, (ii) accessibility, (iii) outcomes, (iv) range and reach of services, (v) prevention and (vi) pharmaceuticals. In its most recent iteration, the United Kingdom ranked only 14th of the 35 countries studied. This is in stark contrast to the assessment by the Commonwealth Fund, which just a year earlier, in 2014, rated the UK as the best performing health system among a set of high-income countries.
While we understand the excitement surrounding health system rankings, we caution against over-interpreting them and, especially, the EuroHealth Consumer Index which, as we will show, is especially problematic.
Arbitrary scores are given to indicators
The index is constructed by scoring performance in the six areas listed above as good (3), intermediate (2) or not-so-good (1), based on arbitrary cut-off points. Consequently, countries with similar performance will receive very different scores if they fall on either side of a cut-off point. For example, Poland scores not-so-good on “equity of health care systems” because only 69.6% of its health care is publicly funded, yet Slovakia receives an intermediate score as it achieves 70.0% (a whopping 0.4 percentage points more). Switzerland earns a “good” score on this measure, despite high deductibles, out-of-pocket spending that, as a share of total expenditure, is twice the EU-15 average, high levels of unmet need compared with other countries, and many peer-reviewed studies concluding that the Swiss financing system is regressive.
The point system does not reflect what matters to citizens
There is no obvious logic in how many points are allocated to each indicator. For example, all health outcomes indicators are worth a total of 250 points, while accessibility is worth 225 points. Yet there are more outcome indicators (8) than accessibility indicators (6), so the maximum score on any given accessibility metric (e.g. waiting time) is higher than on an outcome metric (37.5 compared with 31.25 points). Thus, not only do abortion rates and cancer survival carry the same weight (since both are considered health outcomes), but an outcome indicator like cancer survival counts for less than an accessibility indicator like direct access to a specialist.
This seemingly indiscriminate approach to allocating points also means that countries can accumulate similar points by prioritizing different issues, even if citizens, had they been asked, would be unlikely to see these as equivalent. For example, a hypothetical country coming last on both cancer survival and infant deaths earns approximately 10.4 points for each indicator (out of a possible 62.5 points in total), leaving a deficit of around 41.6 points. Rather than investing in improving these outcomes, it could compensate simply by abolishing gatekeeping (since allowing direct access to a specialist can gain 37.5 points), which may or may not improve outcomes on these measures.
There is no apparent basis for selecting the indicators
Lastly, the indicators are a strange mix of trends over time and cross-sectional rankings. For example, heart disease and stroke deaths are measured as changes over time, whereas hospital-acquired infections use only the most recent data point, penalizing countries showing substantial improvements, such as the United Kingdom, where the percentage of hospital-acquired infections that are resistant has fallen from 21.6% in 2010 to 13.8% in 2013. Others, such as Estonia, have seen this figure increase by 2.8 percentage points over the same period (albeit from an admittedly low level, reaching just 3.5% in 2013) but still receive the maximum points.
Conclusion: We should just ignore the findings of the EuroHealth Consumer Index
While many other health system rankings have been widely criticised, such as the 2000 World Health Report, these are far more transparent, methodologically, than the EuroHealth Consumer Index. There is, of course, no “right” way to rank health systems, or any other complex system for that matter: choices must be made about which indicators to assess and what values to attribute to them. However, the notion that we can (or should!) rank health systems on a single measure that haphazardly combines indicators scored seemingly at random is debatable. Composite indices conceal what is actually going on in health systems, and offer little guidance for policymakers who want to improve the performance of their system.
Although the report accepts that its results are not “dissertation quality” and must be treated “with caution”, it draws inappropriate conclusions about the superiority of one system over another, leading to uninformed recommendations and assertions that display limited understanding of health systems. This is patently irresponsible. There is great potential for countries to learn from each other through careful comparison, but the EuroHealth Consumer Index’s use of poorly constructed composite indices of uncertain origin is unlikely to inform any evidence-based policy development.
Jonathan Cylus, Ellen Nolte, Josep Figueras and Martin McKee.
European Observatory on Health Systems and Policies, Brussels and London
Competing interest: The European Observatory on Health Systems and Policies is a partnership of governments, universities and international agencies that undertakes international comparisons of health systems.