11 Feb, 16 | by BMJ
This blog was originally written for The BMJ and posted on bmj.com/blogs
Our estimate that 85% of all health research is being avoidably “wasted” commonly elicits disbelief. Our own first reaction was similar: “that can’t be right?” Not only did 85% sound too much, but given that $200 billion per year is spent globally on health and medical research, it implied an annual waste of $170 billion. That amount ranks somewhere between the GDPs of Kuwait and Hungary. It seems a problem worthy of serious analysis and attention. But how can we estimate the waste?
Let’s break up the 85% figure by its components. The easiest fraction to understand is the fraction wasted by failure to publish completed research. We know from follow up of registered clinical trials that about 50% are never published in full, a figure which varies little across countries, size of study, funding source, or phase of trial. If the results of research are never made publicly accessible—to other researchers or to end-users—then they cannot contribute to knowledge. The time, effort, and funds involved in planning and conducting further research without access to this knowledge are incalculable.
Publication is one necessary, but insufficient, step in avoiding research waste. Published reports of research must also be sufficiently clear, complete, and accurate for others to interpret, use, or replicate the research correctly. But again, at least 50% of published reports do not meet these requirements. Measured endpoints often go unreported, methods and analyses are poorly explained, and interventions are insufficiently described for others—researchers, health professionals, and patients—to use. All these problems are avoidable, and hence represent a further “waste.”
Finally, new research studies should be designed to take systematic account of lessons and results from previous, related research, but at least 50% are not. New studies are frequently developed without a systematic examination of previous research on the same questions, and they often contain readily avoidable design flaws. And even if well designed, the execution of the research process may invalidate it, for example, through poor implementation of randomization or blinding procedures.
Given these essential elements—accessible publication, complete reporting, good design—we can estimate the overall percentage of waste. Let us first ask: of 100 research projects, how many satisfy all three criteria? Of 100 projects, 50 would be published. Of these 50 published studies, 25 would be sufficiently well reported to be usable and replicable. And of those 25, about half (12.5) would have no serious, avoidable design flaws. Hence the percentage of research that does NOT satisfy all three stages is the remainder, or 87.5 out of 100. In our 2009 paper, we rounded this down to 85%*.
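The compounding arithmetic above can be sketched in a few lines of code; the three stage-survival rates are the approximate 50% figures quoted in this post:

```python
# Compounding arithmetic behind the 85% waste estimate.
# Each rate is the ~50% survival fraction quoted in the text.
published = 0.5        # ~50% of completed studies are published in full
well_reported = 0.5    # ~50% of published reports are usable and replicable
well_designed = 0.5    # ~50% of those are free of serious avoidable design flaws

projects = 100
usable = projects * published * well_reported * well_designed   # 12.5
wasted_percent = 100 - usable                                   # 87.5

print(f"Usable: {usable} of {projects}; wasted: {wasted_percent}%")
```

Multiplying the three independent survival fractions gives 12.5 usable projects per 100, leaving 87.5% wasted, which the authors rounded down to 85%.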
Although the data on which our estimates were based came mainly from research on clinical research, particularly controlled trials, the problems appear to be at least as great in preclinical research. Additionally, our 2009 estimate did not account for waste in deciding what research to do and inefficiencies in regulating and conducting research. These were covered in the 2014 Lancet series on waste, but it is harder to arrive at a justifiable estimate of their impact.
If research were a transport business, we would be appalled by these data. Half the goods carried would be badly designed, half lost in shipping, and half of the remainder broken by the time they arrived—a truly heartbreaking waste. The “good news” is that there is vast potential gain from salvage operations! Either rescuing sunken trials from the bottom of the ocean, or repairing the damaged ones, might retrieve up to 75% of the waste (we cannot retrospectively fix poor design). These salvage and repair operations may be the most cost-effective way of improving the yield from research: a few percent of the current budget could be used to recover lost and poorly reported research, as proposed by the AllTrials campaign. However, we need to press on with that salvage: data from studies are being lost forever at a rate of perhaps 7% per year. We certainly should, and must, attend to that—indeed it seems both an economic and an ethical imperative—but we also need to improve the processes and incentive systems in research. This is the motive that led to the launch of the REWARD Alliance, which held its first conference in Edinburgh in September 2015. The Alliance is currently working with funders, regulators, publishers, organisations, and others to reduce waste and add value.
*Footnote: If you are concerned about correlation between the steps, first note that the studies of reporting examined published studies only, so the dependence between those two steps is already accounted for. We do assume independence between avoidable design flaws and publication, but the Ross study suggests the correlation is only modest, so we still think rounding to 85% gives a reasonable assessment.
Paul Glasziou is professor of evidence based medicine at Bond University, and chairs the International Society for Evidence Based Health Care. His research focuses on improving the clinical impact of research. As a general practitioner, this work has particularly focused on the applicability and usability of published trials and systematic reviews.
Between 1978 and 2003, Iain Chalmers helped to establish the National Perinatal Epidemiology Unit and the Cochrane Collaboration. Since 2003 he has coordinated the James Lind Initiative’s contribution to the development of the James Lind Alliance, the James Lind Library, Testing Treatments interactive, and REWARD.