
Paul Glasziou and Iain Chalmers: Is 85% of health research really “wasted”?

14 Jan, 16 | by BMJ


Our estimate that 85% of all health research is being avoidably “wasted” [Chalmers & Glasziou, 2009] commonly elicits disbelief. Our own first reaction was similar: “that can’t be right?” Not only did 85% sound too high, but given that $200 billion per year is spent globally on health and medical research, it implied an annual waste of $170 billion. That amount ranks somewhere between the GDPs of Kuwait and Hungary. It seems a problem worthy of serious analysis and attention. But how can we estimate the waste?

Let’s break up the 85% figure by its components. The easiest component to understand is the fraction wasted through failure to publish completed research. We know from follow-up of registered clinical trials that about 50% are never published in full, a figure which varies little across countries, size of study, funding source, or phase of trial [Ross, 2012]. If the results of research are never made publicly accessible—to other researchers or to end-users—then they cannot contribute to knowledge. The time, effort, and funds involved in planning and conducting further research without access to this knowledge are incalculable.

Publication is one necessary, but insufficient, step in avoiding research waste. Published reports of research must also be sufficiently clear, complete, and accurate for others to interpret, use, or replicate the research correctly. But again, at least 50% of published reports do not meet these requirements [Glasziou, 2014]. Measured endpoints often go unreported, methods and analyses are poorly explained, and interventions are insufficiently described for others—researchers, health professionals, and patients—to use. All these problems are avoidable, and hence represent a further “waste.”

Finally, new research studies should be designed to take systematic account of lessons and results from previous, related research, but at least 50% are not. New studies are frequently developed without a systematic examination of previous research on the same questions, and they often contain readily avoidable design flaws [Yordanov, 2015]. And even if well designed, the execution of the research process may invalidate it, for example, through poor implementation of randomization or blinding procedures.

Given these essential elements—accessible publication, complete reporting, good design—we can estimate the overall percentage of waste. Let us first consider what fraction of 100 research projects DOES satisfy all these criteria. Of 100 projects, 50 would be published. Of these 50 published studies, 25 would be sufficiently well reported to be usable and replicable. And of those 25, about half (12.5) would have no serious, avoidable design flaws. Hence the fraction of research that does NOT satisfy these stages is the remainder, or 87.5 out of 100. In our 2009 paper, we rounded this down to 85%*.
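
For readers who want the arithmetic spelled out, here is a minimal sketch of the compounding calculation in Python (our illustration, not code from the 2009 paper; the three 50% figures are the rounded estimates cited above):

    # Compound the three rounded 50% estimates cited in the text.
    published = 0.50      # fraction of completed studies published in full
    well_reported = 0.50  # fraction of published reports usable and replicable
    well_designed = 0.50  # fraction free of serious, avoidable design flaws

    usable = published * well_reported * well_designed
    print(f"usable: {usable:.1%}")      # 12.5%
    print(f"wasted: {1 - usable:.1%}")  # 87.5%, rounded down to 85%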

Although the data on which our estimates were based came mainly from studies of clinical research, particularly controlled trials, the problems appear to be at least as great in preclinical research [Macleod, 2014]. Additionally, our 2009 estimate did not account for waste in deciding what research to do, or for inefficiencies in regulating and conducting research. These were covered in the 2014 Lancet series on waste, but it is harder to arrive at a justifiable estimate of their impact.

If research were a transport business, we would be appalled by these data. Half the goods carried would be badly designed, half would be lost in shipping, and half of the remainder would be broken by the time they arrived—a truly heartbreaking waste. The “good news” is that there is vast potential gain from salvage operations! Either rescuing sunken trials from the bottom of the ocean, or repairing the damaged ones, might retrieve up to 75% of the waste (we cannot retrospectively fix poor design). These salvage and repair operations may be the most cost-effective way of improving the yield from research: a few percent of the current budget could be used to recover lost and poorly reported research, as proposed by the AllTrials campaign. However, we need to press on with that salvage: data from studies are being lost forever at a rate of perhaps 7% per year [Vines, 2014]. We certainly should, and must, attend to that—indeed it seems both an economic and an ethical imperative—but we also need to improve the processes and incentive systems in research. This is the motivation that led to the launch of the REWARD Alliance, which held its first conference in Edinburgh in September 2015. The Alliance is currently working with funders, regulators, publishers, organisations, and others to reduce waste and add value.
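
To make that loss rate concrete, here is a hedged illustration in Python, treating the roughly 7% per year figure from Vines (2014) as simple compounding decay (an assumption of ours for illustration, not the model used in that paper):

    # Fraction of study data still available after n years, assuming the
    # ~7% annual loss compounds like exponential decay (our assumption).
    annual_loss = 0.07
    for years in (5, 10, 20):
        remaining = (1 - annual_loss) ** years
        print(f"after {years:2d} years: {remaining:.0%} still available")
    # after  5 years: 70%; after 10 years: 48%; after 20 years: 23%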

*Footnote: If you are concerned about correlation between the steps, first note that the studies of reporting quality examined published studies only, so the dependence between those two steps is already accounted for. We do assume independence between avoidable design flaws and publication, but the Ross study suggests the correlation is only modest, so we still think rounding down to 85% gives a reasonable assessment (a rough sensitivity check is sketched below).
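
As a rough check on that footnote, the following Python sketch (our construction, not an analysis from the paper) treats publication and sound design as correlated binary events and shows that modest positive correlation moves the estimate only from 87.5% toward 85%:

    import math

    # Rounded estimates from the text.
    p_pub, p_report, p_design = 0.50, 0.50, 0.50

    def waste(rho):
        # Probability that a study is both published and well designed, for
        # two Bernoulli variables with correlation rho. Reporting quality was
        # measured within published studies, so that factor multiplies in.
        joint = p_pub * p_design + rho * math.sqrt(
            p_pub * (1 - p_pub) * p_design * (1 - p_design))
        return 1 - joint * p_report

    for rho in (0.0, 0.1, 0.2):
        print(f"correlation {rho:.1f}: waste = {waste(rho):.1%}")
    # rho 0.0 gives 87.5%; rho 0.2 gives 85.0%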

Paul Glasziou is professor of evidence based medicine at Bond University, and chairs the International Society for Evidence Based Health Care. His research focuses on improving the clinical impact of research. As a general practitioner, this work has particularly focused on the applicability and usability of published trials and systematic reviews.

Competing interests: None declared.

Between 1978 and 2003, Iain Chalmers helped to establish the National Perinatal Epidemiology Unit and the Cochrane Collaboration. Since 2003 he has coordinated the James Lind Initiative’s contribution to the development of the James Lind Alliance, the James Lind Library, Testing Treatments interactive, and REWARD.

Competing interests: IC declares no competing interests other than his NIHR salary, which requires him to promote better research for better healthcare.

References:

1. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009 Jul 4;374(9683):86-9.

2. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012 Jan 3;344:d7292.

3. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014 Jan 18;383(9913):267-76.

4. Yordanov Y, et al. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809.

5. Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014 Jan 11;383(9912):101-4.

6. Vines TH, Albert AY, Andrew RL et al. The availability of research data declines rapidly with article age. Curr Biol. 2014 Jan 6;24(1):94-7.

  • Adam Jacobs

    “We know from follow-up of registered clinical trials that about 50% are never published in full”

    I’m not sure you do know that. That 50% figure is very widely quoted, but it owes more to being a nice soundbite than to actual evidence. Recent studies of disclosure rates show that somewhere in the region of 80-90% of studies are published these days.

    More on those statistics here:

    http://www.statsguy.co.uk/zombie-statistics-on-half-of-all-clinical-trials-unpublished/

  • anabhan

    Brilliant. “new research studies should be designed to take systematic account of lessons and results from previous, related research, but at least 50% are not. New studies are frequently developed without a systematic examination of previous research on the same questions, and they often contain readily avoidable design flaws. And even if well designed, the execution of the research process may invalidate it, for example, through poor implementation of randomization or blinding procedures.”
    This says it all. I do hope we can change that globally.

  • Bernard Carroll

    For too many the name of the game is to keep the game going in service of funding and of safe, established paradigms. If they happen to move the ball down the field, that’s just a nice bonus.

  • vijay

    Not to trivialise the discussion, but don’t you think transportation is the wrong analogy? The assumption underlying that comparison, it seems to me, is that research aims to optimise efficiency, whereas research ought to have other, more important roles. The wastefulness you speak of seems to be a persistent feature of riskier propositions (the one that comes to mind is that of start-ups/venture capitalism), although it’s likely that analogy isn’t perfect either.

    I would like to see analysis as to why research that is completed does not enter publication (I could speculate on a few reasons, the most important of which would be the absence of “significant” results).

    While the argument that these data ought to be salvaged and made use of is a great one, I don’t know whether the push to publish work merely because money and man-hours have been spent on it is a good one. Already, it appears as if publishing in the scientific literature is an end in its own right, with no consideration of what the reader is to do with all this information. Mandating that all completed studies must be published in the same way (as some votaries are suggesting, although I don’t know whether the authors of this piece would fall into that category) would, I fear, make the scientific literature even more inaccessible to the average practitioner.

  • Paul Glasziou

    From a recent review of 38 surveys (www.ncbi.nlm.nih.gov/pubmed/25335091) the most common reasons for researchers not publishing were: “lack of time or low priority (median 33%), studies being incomplete (median 15%), study not for publication (median 14%), manuscript in preparation or under review (median 12%), unimportant or negative result (median 12%), …” For trials at least, there is now the option to post results in a Clinical Trials Registry (which would be searchable) but that is not yet true of most research.

  • Paul Glasziou

    Being precise about the number of unpublished studies is intrinsically difficult, particularly for unregistered studies, but the 50% figure remains reasonable.
    The studies cited are very selective and focus mostly on registered studies in ClinicalTrials.gov, which is already unrepresentative of the global rate.
    Bourgeois estimates a publication rate of 66%, but states in the Discussion that “a substantial number of trials were not registered before the study start date or the publication of trial results”.
    That suggests a selection bias: some authors only register at the time they proceed to publication, whereas some unpublished studies never register at all. Hence even the 66% is an overestimate. Bourgeois also notes that a further 14% of unpublished studies were “posted in study result registries or company Web sites”, which has been a requirement for US clinical trials since 2007 but is only partially complied with (see https://clinicaltrials.gov/ct2/resources/trends). But that US requirement is far from a global requirement, and it applies only to the tiny fraction of research that consists of clinical trials. Improving the disclosure rate of ALL unpublished studies is essential.
