Paul Glasziou and Iain Chalmers: Can it really be true that 50% of research is unpublished?

Whatever the precise non-publication rate, it represents a serious waste of the roughly $180 billion invested annually in health and medical research globally.


If 50% of the mail we posted never arrived, the outcry would be considerable. Although current estimates are that about half of research goes unpublished, there is little outcry. Maybe that is because the results of research projects are not addressed to a specific person who would notice when they hadn’t arrived; or maybe some think the situation isn’t as bad as implied by the 50% estimate.

Rates of publication have been best documented for clinical trials, particularly since trial registration at inception became more widespread over the past 20 years. In the 1980s and 1990s, estimates of trial publication rates were derived from retrospective cohort studies of trial proposals submitted to ethics committees, and from specialist trial registers. In this century, however, mandated trial registration has enabled much larger cohorts of trials to be investigated.

So does the 50% estimate still hold for trials, given the increased expectations of registration and reporting? And because trials constitute only a small proportion (2-3%) of all biomedical studies, is the 50% figure also true for other types of research?

The key obstacle to answering these questions is knowing about all the unpublished research—research’s “dark side of the moon.” At least three methods have been used to estimate the proportion of unpublished studies, using as denominators cohorts of all studies: (i) studies seen by specific ethics committees, (ii) studies presented at specific conferences, or (iii) studies pre-registered in registries. None of these methods captures all studies (not all studies require ethics approval, not all are presented at conferences, and few have to be registered), so all three tend to underestimate the non-publication rate. A recent overview by Schmucker et al of 17 cohorts of studies approved by research ethics committees (RECs) found that, on average, 46% were published; among 22 cohorts of trials included in trial registries, on average 54% were published. In summary, slightly less than half of the studies (trial and non-trial) approved by ethics committees had been published, and slightly more than half of pre-registered controlled trials had been published.
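To see how such pooled estimates are arrived at, here is a minimal sketch of weighting publication rates by cohort size. The per-cohort numbers below are invented for illustration; they are not Schmucker et al’s data:

    # Minimal sketch: pooling publication rates across cohorts of studies,
    # weighted by cohort size. The cohort figures are hypothetical
    # placeholders, not data from Schmucker et al.
    cohorts = [
        # (studies approved by an ethics committee, of those, number published)
        (120, 55),
        (300, 140),
        (80, 37),
    ]

    total_studies = sum(n for n, _ in cohorts)
    total_published = sum(p for _, p in cohorts)

    pooled_rate = total_published / total_studies
    print(f"Pooled publication rate:   {pooled_rate:.0%}")      # 46% here
    print(f"Estimated non-publication: {1 - pooled_rate:.0%}")  # 54% here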

Some of the studies reviewed by Schmucker et al were quite old, however, so do those estimated publication rates still apply? The most relevant recent large study, by Chen et al, found similar results: of 4,347 clinical trials registered in ClinicalTrials.gov, 2,458 (57%) had been published and 2,892 (67%) had been either published or had results reported without journal publication. The 10% that were reported but not formally published in journals is noteworthy: Chen et al found that 27% had results reported on ClinicalTrials.gov, which provides fields and support for such reporting (and is mandated for US trials). So the bad news is that the rate of publication in journals seems unchanged, but the good news is that the results of an additional 10% are available in trial registries. TrialsTracker is attempting to automate the monitoring of publication rates, and provides a breakdown by sponsor. Its current analysis of 29,377 eligible trials found a 55% publication rate (that is, 45% missing).
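The arithmetic behind those headline percentages is easy to retrace; a quick sketch using the counts quoted above:

    # Retracing the percentages quoted from Chen et al
    # (counts as quoted above, rounded to whole percentages).
    registered = 4347   # trials registered on ClinicalTrials.gov
    published = 2458    # published in a journal
    disclosed = 2892    # published, or results posted on the registry

    print(f"Published in a journal:      {published / registered:.0%}")                # 57%
    print(f"Published or results posted: {disclosed / registered:.0%}")                # 67%
    print(f"Registry-only disclosure:    {(disclosed - published) / registered:.0%}")  # 10%
    print(f"Missing entirely:            {(registered - disclosed) / registered:.0%}") # 33%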

Maybe it’s only small or poor-quality studies that go unpublished? The best analysis of that possibility found that publication rates varied little by country, size of trial, or trial type. Unfortunately, the best predictor of publication seems to be whether a study’s results are “positive” or “negative,” which means that the half of research results we can access is biased. So there is both waste and distortion.
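A toy simulation makes this distortion concrete. If only trials with “significant” results reach journals, the published subset systematically overestimates the true effect. All parameters below are invented for illustration:

    # Toy simulation of publication bias: many small trials of a treatment
    # with a modest true effect; only "positive" results (roughly z > 1.96,
    # i.e. p < 0.05) get published. The published subset then overestimates
    # the true effect. Parameters are illustrative only.
    import random
    import statistics

    random.seed(42)
    TRUE_EFFECT = 0.2   # true standardized effect size
    SE = 0.15           # standard error of each trial's estimate

    estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]
    published = [e for e in estimates if e / SE > 1.96]  # "significant" only

    print(f"True effect:              {TRUE_EFFECT}")
    print(f"Mean over all trials:     {statistics.mean(estimates):.2f}")  # ~0.20
    print(f"Mean over published only: {statistics.mean(published):.2f}")  # ~0.39
    print(f"Share published:          {len(published) / len(estimates):.0%}")

In this toy setup the published trials alone suggest an effect nearly twice the true one, even though every individual trial was honestly reported.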

For animal studies and other preclinical research, we know much less, both because study registries are very rare and because mandatory ethics clearance is patchy. A survey of animal researchers reported that respondents thought about 50% of animal studies went unpublished, but little direct evidence exists.

Whether the precise non-publication rate is 30%, 40%, or 50%, it still represents a serious waste of the roughly $180 billion invested annually in health and medical research globally. Non-publication means that researchers cannot replicate or learn from the results found by others—particularly the disappointments, which are less likely to be published. Funders deciding on the likely gain from new research cannot base that decision on all previous research. And reviewers trying to summarize all the research addressing a particular question are limited to a biased subsample of what has been done.

Although there has been some modest progress in reducing biased under-reporting of research, efforts are still needed to ensure that all trials are registered and reported, and to extend those principles to all studies. A prerequisite for achieving these objectives will be a better understanding of the causes of, and cures for, non-publication.

P.S. Despite the considerable avoidable waste in medical research, from non-publication and other causes, investment in biomedical research is cost-effective and serves the interests of the public. Working to reduce waste and so improve the return on investment is important; the existence of waste, however, should not be used as a reason to reduce support for medical research, as recently proposed by US President Donald Trump but sensibly rejected by Congress.

Paul Glasziou is professor of evidence based medicine at Bond University and a part time general practitioner.

Competing interests: None declared.

Between 1978 and 2003, Iain Chalmers helped to establish the National Perinatal Epidemiology Unit and the Cochrane Collaboration. Since 2003 he has coordinated the James Lind Initiative’s contribution to the development of the James Lind Alliance, the James Lind Library, Testing Treatments interactive, and REWARD.

Competing interests: IC declares no competing interests other than his NIHR salary, which requires him to promote better research for better healthcare.

References:

Schmucker C, Schell LK, Portalupi S, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One 2014;9(12):e114023.

Chen R, Desai NR, Ross JS, et al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ 2016;352:i637.

Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6(9):e1000144.

ter Riet G, Korevaar DA, Leenaars M, et al. Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences, and potential solutions. PLoS One 2012;7(9):e43404.

  • Adam Jacobs

    It’s worth noting that the study by Chen et al that found a 67% disclosure rate looked specifically at academic medical centres. There seems to be a divergence between academia and industry in the rates of reporting. One of the most recent studies of disclosure in industry-sponsored research that I’m aware of (http://www.tandfonline.com/doi/abs/10.1080/03007995.2016.1263612) found a disclosure rate of 93%.

  • Rogerblack

    Bare publication of a clinical trial is an interesting metric to consider, of course.

    However, the fraction of clinical trials that actually publish what they say they are going to publish is much smaller than this.

    Clinical trials that switch a secondary outcome to a primary one, or even construct a composite metric as a new primary outcome to get a significant result, can reasonably be said not to have published their primary outcome.

    https://www.ncbi.nlm.nih.gov/pubmed/28570573 as one example of an analysis of this.

    “109 RCTs were included. Our analysis revealed 118 major discrepancies and 629 total discrepancies. Among the 118 discrepancies, 30 (25.4%) primary outcomes were demoted, 47 (39.8%) primary outcomes were omitted, and 30 (25.4%) primary outcomes were added. Three (2.5%) secondary outcomes were upgraded to a primary outcome. The timing of assessment for a primary outcome changed eight (6.8%) times. Thirty-one major discrepancies were published with a P-value and twenty-five (80.6%) favored statistical significance.”

    “Our results suggest that outcome changes occur frequently in hematology trials. Because RCTs ultimately underpin clinical judgment and guide policy implementation, selective reporting could pose a threat to medical decision making.”

    If you demote your stated primary outcome to somewhere in the middle of the third page, or bury it deep within the supplemental data, you can quite reasonably be said not to have published it.

  • I’m currently looking at the clinical trials reporting performance of top UK universities and have found two examples of the same trial listing different pre-defined outcomes on different registries. The registry entries are not cross-linked, and the related journal articles contained only one of the two trial ID numbers. So somebody could register the same trial twice in two registries with two sets of outcomes, see how the trial goes, choose to write a journal article based on whichever set of outcomes yields “stronger” results, and then reference only that trial ID in the abstract.

    There is no indication that the discrepancies between the two trial entries I reviewed were created deliberately for this purpose, but it would be good if future studies could also look at outcome inconsistencies between trial registry entries (both between different registries, and between the protocols and results sheets for the same trial on ClinicalTrials.gov) rather than only between registry entries and the academic literature.

    A second issue that may merit greater study is the incredibly sloppy management by university sponsors of their data in trial registries, which is often so weak that it defeats the whole purpose of having registries in the first place. Some examples here:
    https://www.transparimed.org/single-post/2017/05/21/Aberdeen-Uni-Pledges-Audit-of-Clinical-Trials-Transparency-Performance

  • Jon Brassey

    I’d be interested in understanding Paul or Iain’s perspective on the implications of unpublished trials for systematic reviews (SRs). You highlight above that reviewers are restricted to a biased subsample of studies. This is confirmed in papers such as Schroll’s 2013 BMJ paper (http://www.bmj.com/content/346/bmj.f2231.long).

    When we compare SRs based on published studies with those based on all studies, the results can diverge massively. For instance, Turner reported that using published journal articles tended to overestimate the effects of interventions, on average, by over 30% (http://www.nejm.org/doi/full/10.1056/NEJMsa065779). This finding was not isolated; see also the 2012 review by Hart in the BMJ (http://www.bmj.com/content/344/bmj.d7202.long).

    I completely agree about the need to publish all trials – that’s not the issue.

    The evidence indicates that SRs based on published journal articles cannot be relied upon to be ‘accurate’ (relative to SRs based on all studies), so how do they see the reliability of SRs for supporting decision making in the healthcare setting?

  • Paul Glasziou

    As we state, our estimate is for non-publication of *studies*, of which clinical trials are only a small fraction. Jacobs’ “recent study” of trial publication is limited to “new medicines approved by the European Medicines Agency (EMA) during 2013”, so it does not include the many trials of drugs that never seek approval, and hence is not directly comparable to the more comprehensive estimates we cite. Rogerblack raises the valid point that outcomes are often omitted or swapped, causing further distortion and waste; this problem is included in our overall “85% waste” estimate, which arises from the combination of avoidable design flaws, non-publication, and poor reporting – http://blogs.bmj.com/bmj/2016/01/14/paul-glasziou-and-iain-chalmers-is-85-of-health-research-really-wasted/

  • Adam Jacobs

    Yes, I’m sure you’re right that there is a big difference between clinical trials of medicines and “studies” more widely, including epidemiological research. I’m not aware of any good estimate for studies outside clinical trials, but it wouldn’t surprise me in the slightest if non-publication rates were considerably higher than the 10-20% that we typically see these days for clinical trials.

    Thanks for clarifying the scope of your article.

  • jhnoblejr

    Where is the empirical evidence to support the assertion, “. . . investment in biomedical research is cost-effective and serves the interests of the public”? Would at least one study, amidst the entire universe of reported and unreported studies, that produces a valid and useful innovation be sufficient verification of the proposition? My own faith-based stance is to agree, no matter how many dollars it takes to produce that one study.

  • David King

    It is a shame that some of the reasons for not publishing research have not been considered or explored further. As an early stage researcher, I would be happy to publish my negative results. But which journal will accept them? The BMJ is quite happy to publish an article lamenting all the unpublished data hidden away by academics, but when did it, or the myriad publications it profits from, last publish something negative? The mainstream journals do not publish negative studies because no one reads or cites them. The only journals that accept them are the predatory “open access” ones, and they demand a hefty publication fee to do so.

    And the next time I apply for a grant, how interested will the panel be in all the things I’ve shown are unlikely to be clinically beneficial? They want positive findings in high impact journals, and if I haven’t got them my academic career will be dead in the water.

    Researchers may well not publish negative studies, but this is because the entire academic process is geared towards positive results. Until this changes, the results of negative studies will continue to be suppressed.