21 Dec, 12 | by BMJ
This blog reports on a workshop held by the Department of Health on 28 November 2012 under the Chatham House Rule, that is, the discussion can be reported but not attributed.
This workshop aimed to provide a framework for “a part of the government exploring use of QALY weights” in value based pricing (VBP), specifically:
- Burden of illness: does the social value of a QALY vary by size of unmet need (“QALYs lost per patient with current best practice treatment”)?
- Therapeutic innovation and improvement: does social value vary with “magnitude of QALY gain provided by a treatment?”
- End of life (EOL): does social value vary by "patient's life expectancy with current best practice treatment?" (quotes are from the Department of Health briefing papers).
Outside the remit of this workshop were issues dealt with at a previous workshop to do with wider social benefits, such as effects on carers and on patients' return to work.
Three presentations covered a review of the relevant literature and two recent surveys.
The literature review suggested that people attached a premium to severity (defined differently from burden of illness), but the evidence was not capable of quantifying this in the form of "weights" for different levels of severity. It was suggested that the relationship between social value and capacity for health gain might not be linear, but might curve in a way that gave a greater weight to severity (such as by means of proportional as opposed to absolute shortfall). Such approaches were reported as having been applied in Norway and the Netherlands.
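To make the shortfall distinction concrete, here is a minimal illustrative sketch (my own hypothetical numbers, not from the workshop or the literature review): absolute shortfall counts the QALYs a patient loses to a disease, while proportional shortfall expresses that loss as a fraction of the patient's remaining QALY expectation without the disease, so the two measures can rank the same patients differently.

```python
# Hypothetical illustration of absolute vs proportional shortfall.
# The patient figures below are invented for the example.

def absolute_shortfall(qalys_without_disease, qalys_with_disease):
    """QALYs lost to the disease, relative to remaining expectation without it."""
    return qalys_without_disease - qalys_with_disease

def proportional_shortfall(qalys_without_disease, qalys_with_disease):
    """Fraction of remaining QALY expectation lost to the disease."""
    return (qalys_without_disease - qalys_with_disease) / qalys_without_disease

# Patient A: younger, expects 40 QALYs without the disease, 30 with it.
# Patient B: older, expects 10 QALYs without the disease, 2 with it.
print(absolute_shortfall(40, 30), proportional_shortfall(40, 30))  # 10 QALYs lost, 25% of remaining health
print(absolute_shortfall(10, 2), proportional_shortfall(10, 2))    # 8 QALYs lost, 80% of remaining health
```

On the absolute measure patient A fares worse (10 QALYs lost versus 8), but on the proportional measure patient B fares far worse (80% versus 25%), which is the sense in which a proportional approach gives greater weight to severity.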
A published survey to do with cancer drugs was summarised. It explored societal preferences for the Cancer Drugs Fund and other criteria, including those proposed for rewarding new medicines under the future VBP system, by means of a choice based experiment with 4,118 UK adults via web based surveys. Preferences were determined by asking respondents to allocate fixed funds between different patient and disease types reflecting nine specific prioritisation criteria. Respondents supported prioritisation for severe diseases and for unmet needs, provided they offered substantial health benefits. The results did not support the end of life premium, the prioritisation of children or of disadvantaged populations, special funding status for treatments of rare diseases, or the Cancer Drugs Fund.
A second, unpublished survey, commissioned by the department, was also reported. This was a complex discrete choice experiment, with pilot, feedback, and qualitative elements. Over 3,000 people were interviewed online, but with somewhat different results from the other survey. It found that the size of the QALY gain mattered, but at a decreasing rate (supporting Therapeutic Innovation and Improvement), and that end of life was also prioritised (higher social value if life expectancy was short or the patient was at the end of life). But it found no support for Burden of Illness (QALYs lost per patient with current best practice treatment).
The research presented supported prioritisation for severity in some form, but not necessarily in the form of Burden of Illness as proposed for value based pricing. Some research supported a higher weight for big as opposed to small QALY gains (supporting Therapeutic Innovation). The results were mixed on prioritising patients close to the end of life. It was not possible to reach consensus on which dimensions the public values, whether positively or negatively, let alone to quantify those preferences as weights to the QALY.
The contrasting results of the two surveys might have been due to how they handled age: age was included in one survey but explicitly excluded from the other for legal and political reasons.
Overall, I left convinced that it is proving very hard to improve on NICE's current method of informed deliberation, but also that NICE's current exceptions to "a QALY is a QALY is a QALY," in the form of the Cancer Drugs Fund and the end of life criteria, get little or no support. How these conundrums will be resolved will be the subject of a further workshop early in 2013.
James Raftery is a health economist with several decades' experience of the NHS. He is Professor of Health Technology Assessment at Southampton University. A keen "NICE-watcher," he has provided economic input to technical assessment reports for NICE but has never been a member of any of its committees. The opinions expressed here are his personal views.