Paul Glasziou and Iain Chalmers: Can it really be true that 50% of research is unpublished?

Whatever the precise non-publication rate is, it is a serious waste of the roughly $180 billion annually invested in health and medical research globally

If 50% of mail we posted never arrived, the outcry would be considerable. Although current estimates are that about half of research goes unpublished, there is little outcry. Maybe that is because the results of research projects are not addressed to a specific person who would notice when they hadn’t arrived; or maybe some think the situation isn’t as bad as implied by the 50% estimate.

Rates of publication have been documented best for clinical trials, particularly since trial registration at inception became more widespread over the past 20 years. In the 1980s and 1990s estimates of trial publication rates were derived from retrospective cohort studies of trial proposals submitted to ethics committees, and from specialist trial registers. In this century, however, mandated trial registration has enabled much larger cohorts of trials to be investigated.

So is the 50% estimate still true for trials, now that expectations of registration and reporting have increased? And because trials constitute only a small proportion (2-3%) of all biomedical studies, is the 50% figure true for other types of research?

The key obstacle to answering these questions is knowing about all the unpublished research—research’s “dark side of the moon.” At least three methods have been used to estimate the proportion of unpublished studies, using as denominators cohorts of (i) studies seen by specific ethics committees, (ii) studies presented at specific conferences, or (iii) studies pre-registered in registries. None of these methods captures all studies—not all studies require ethics approval, not all are presented at conferences, and few have to be registered—so all tend to underestimate the non-publication rate. A recent overview by Schmucker et al of 17 cohorts of studies approved by research ethics committees (RECs) found that, on average, 46% were published; among 22 analyses of studies included in trial registries, on average 54% were published. In other words, slightly less than half of the studies (trial and non-trial) approved by ethics committees had been published, and slightly more than half of pre-registered controlled trials had been published.

Some of the studies reviewed by Schmucker et al were quite old, however, so do those estimated publication rates still apply? The most relevant recent large study, by Chen et al, found similar results: of the 4,347 clinical trials registered in ClinicalTrials.gov, 2,458 (57%) had been published and 2,892 (67%) had either been published or had results reported without journal publication. The 10% that were reported but not formally published in journals is noteworthy. Chen et al found that 27% had results reported on ClinicalTrials.gov, which provides fields and support for such reporting (and which is mandated for US trials). So the bad news is that the rate of publication in journals seems unchanged, but the good news is that the results of an additional 10% are available in trial registries. TrialsTracker is attempting to automate the monitoring of publication rates, and provides a breakdown by sponsor. Its current analysis of 29,377 eligible trials found a 55% publication rate (that is, 45% missing).
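For readers who want to verify the percentages above, the arithmetic behind the Chen et al figures is straightforward. This is a throwaway sketch; the counts are simply those quoted in the text, not recomputed from the underlying data:

```python
# Figures for trials registered in ClinicalTrials.gov, as quoted from Chen et al (BMJ 2016).
registered = 4347
published = 2458              # appeared as a journal publication
published_or_reported = 2892  # journal publication OR results posted without one

pub_rate = published / registered
any_rate = published_or_reported / registered

print(f"journal publication rate: {pub_rate:.0%}")             # ~57%
print(f"published or reported:    {any_rate:.0%}")             # ~67%
print(f"registry-only results:    {any_rate - pub_rate:.0%}")  # the extra ~10%
```

The same calculation applied to the TrialsTracker snapshot (29,377 trials, 55% published) gives the quoted 45% missing.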

Maybe it’s only small or poor studies that go unpublished? The best analysis of that possibility found that rates varied little by country, size of trial, or trial type. Unfortunately, the best predictor of publication seems to be whether the study is “positive” or “negative,” which means that the half of the research results we can access is biased. So there is both waste and distortion.

For animal studies and other preclinical research, we know much less, both because study registries are very rare and because mandatory ethics clearance is patchy. In one survey of animal researchers, respondents estimated that about 50% of animal studies go unpublished, but little direct evidence exists.

Whether the precise non-publication rate is 30%, 40%, or 50%, it is still a serious waste of the roughly $180 billion annually invested in health and medical research globally. Non-publication means that researchers cannot replicate or learn from the results found by others—particularly the disappointments, which are less likely to be published. Funders deciding on the gain from new research cannot base that decision on all previous research. Reviewers trying to summarize all the research addressing a particular question are limited by access only to a biased subsample of what has been done.

Although there has been some modest progress in reducing biased under-reporting of research, efforts are still needed to ensure that all trials are registered and reported, and to extend those principles to all studies. A prerequisite for achieving these objectives will be a better understanding of the causes of, and cures for, non-publication.

P.S. Despite the considerable avoidable waste in medical research, from non-publication and other causes, investment in biomedical research is cost-effective and serves the interests of the public. Working to reduce waste to improve the return on investment is important, however, and should not be used as reason to reduce support for medical research, as recently proposed by US President Donald Trump, but sensibly rejected by Congress.

Paul Glasziou is professor of evidence based medicine at Bond University and a part time general practitioner.

Competing interests: None declared.

Between 1978 and 2003, Iain Chalmers helped to establish the National Perinatal Epidemiology Unit and the Cochrane Collaboration. Since 2003 he has coordinated the James Lind Initiative’s contribution to the development of the James Lind Alliance, the James Lind Library, Testing Treatments interactive, and REWARD.

Competing interests: IC declares no competing interests other than his NIHR salary, which requires him to promote better research for better healthcare.


Schmucker C, Schell LK, Portalupi S, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One. 2014 Dec 23;9(12):e114023.

Chen R, Desai NR, Ross JS, Zhang W, et al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016 Feb 17;352:i637.

Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med. 2009 Sep;6(9):e1000144.

ter Riet G, Korevaar DA, Leenaars M, Sterk PJ, Van Noorden CJF, Bouter LM, et al. Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions. PLoS One. 2012;7(9):e43404.