Jamie Kirkham: Mitigating the problem of outcome reporting bias

The poor reporting of clinical studies indicates a collective failure of authors, peer reviewers, and editors on a massive scale

Last year the president of the Royal Statistical Society, David Spiegelhalter, observed that “questionable practices such as cherry-picking data and ‘hacking statistics’ to make findings appear more dramatic threatens to undermine public trust in science.” The idea that researchers may be selecting and reporting only the most “impressive” results from a multitude of outcomes or analyses has been raised as a concern since the mid-1990s.

Moreover, in the landmark Lancet series about waste in research, it was suggested that over 50% of planned study outcomes are not reported. This represents a huge waste, potentially amounting to tens of billions of pounds of research investment lost to correctable failures in the reporting of evidence.

This selective non-reporting of outcomes in clinical studies can lead to bias when outcomes are chosen for reporting on the basis of their results, and it has been shown to affect the conclusions of a substantial proportion of Cochrane systematic reviews. This form of bias is commonly referred to as outcome reporting bias.

Since the turn of the millennium, a number of groundbreaking initiatives have been launched to mitigate the problem of outcome reporting bias. Perhaps the single greatest advance in detecting and deterring outcome reporting bias is trial registration. The prospective registration of trials was recently found to be associated both with the publication of trial results and with publication without discrepancies in outcomes.

Journal editors, regulators, research ethics committees, funders, and sponsors should implement policies mandating prospective registration for all clinical trials. There is already evidence that this is happening: in 2004 the International Committee of Medical Journal Editors (ICMJE) made prospective registration a condition of consideration for publication, while the World Health Organization (WHO), through its International Clinical Trials Registry Platform (ICTRP), advocates the registration of all interventional trials as a “scientific, ethical, and moral responsibility.”

The poor reporting of clinical studies indicates a collective failure of authors, peer reviewers, and editors on a massive scale. But these omissions are not always made with intent: researchers may simply not know what information to include in a report of research, and editors may not know what to ask for.

Reporting guidelines have attempted to address this with some success. However, given that reporting guidelines were first introduced over 20 years ago, the empirical evidence suggests that this success has been limited and slow to arrive.

The temptation for researchers to “cherry pick” outcomes perhaps stems from the plethora of potentially measurable outcomes for some conditions: in one review of 21 trials of gabapentin (an anti-epileptic drug), 214 different outcome definitions were identified. The development and use of core outcome sets has the potential to reduce the risk of outcome reporting bias. The COMET (Core Outcome Measures in Effectiveness Trials) Initiative, launched in 2010, facilitates the development and application of such standardised sets of core outcomes for clinical trials involving people with specific conditions.

I hope that such initiatives (and I have named only a few), alone or in combination, will reduce the problem of poorly reported outcome data and the outcome reporting bias that can follow. Taking a more pessimistic view, however, these initiatives are unlikely to eradicate the problem completely, and it remains a concern for secondary analyses of data, as is the case with systematic reviews.

In a recently published Research Methods and Reporting article in The BMJ, my colleagues and I provide a tutorial that shows systematic reviewers how to identify missing outcome data in their reviews, together with usable tools (an outcome matrix generator) to display this information transparently within their review manuscripts.
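To give a concrete flavour, here is a minimal sketch of the kind of trial-by-outcome matrix such a tool produces, written in Python with invented trial and outcome names (the generator described in the paper builds these matrices for you; this is only an illustration of the idea):

```python
# Minimal sketch of an outcome matrix: rows are trials, columns are
# outcomes, and each cell records the reporting status of that outcome.
# All trial names, outcome names, and statuses below are hypothetical.

trials = ["Trial A", "Trial B", "Trial C"]
outcomes = ["Seizure frequency", "Quality of life", "Adverse events"]

# F = fully reported, P = partially reported (no usable data),
# N = not reported
matrix = {
    "Trial A": {"Seizure frequency": "F", "Quality of life": "N", "Adverse events": "P"},
    "Trial B": {"Seizure frequency": "F", "Quality of life": "F", "Adverse events": "N"},
    "Trial C": {"Seizure frequency": "P", "Quality of life": "N", "Adverse events": "F"},
}

# Print the matrix so reviewers can see at a glance where data are missing.
print(f"{'':<10}" + "".join(f"{o:<20}" for o in outcomes))
for t in trials:
    print(f"{t:<10}" + "".join(f"{matrix[t][o]:<20}" for o in outcomes))
```

Laying the review out this way makes the reporting status of every outcome in every trial visible at a glance, so missing data cannot quietly disappear from the synthesis.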

We also provide a classification system, derived from two large empirical studies, for assessing the risk of outcome reporting bias in both benefit and harm outcomes. A sensitivity analysis approach has also been developed that uses these risk of bias assessments to adjust for outcome reporting bias in review meta-analyses, and it can be implemented easily via a web based platform.
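As a rough illustration of the sensitivity analysis idea (and emphatically not the published adjustment method itself), the sketch below pools the trials that reported an outcome and then repeats the analysis after imputing a worst-case null result for trials judged at high risk of outcome reporting bias; all numbers are invented:

```python
import math

# Hypothetical data: (effect estimate, variance) for trials that
# reported the outcome. The numbers are invented for illustration.
reported = [(0.40, 0.04), (0.55, 0.09), (0.30, 0.06)]

# Trials assessed as high risk of outcome reporting bias: the outcome
# was measured but not reported. As a crude worst-case assumption we
# impute a null effect (0.0) with a variance typical of the reported
# trials. This is a simplification, not the ORBIT adjustment method.
high_risk_imputed = [(0.0, 0.06), (0.0, 0.06)]

def fixed_effect(studies):
    """Inverse-variance fixed-effect pooled estimate and 95% CI."""
    weights = [1.0 / v for _, v in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

naive, naive_ci = fixed_effect(reported)
adjusted, adj_ci = fixed_effect(reported + high_risk_imputed)

print(f"Reported trials only:  {naive:.2f} (95% CI {naive_ci[0]:.2f} to {naive_ci[1]:.2f})")
print(f"Worst-case imputation: {adjusted:.2f} (95% CI {adj_ci[0]:.2f} to {adj_ci[1]:.2f})")
```

If the adjusted estimate no longer supports the original conclusion, the review's findings are sensitive to outcome reporting bias, which is exactly what such an analysis is designed to reveal.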

The ORBIT (Outcome Reporting Bias in Trials) website provides access to these tools and serves as a repository for important publications in this field. It is a useful resource for systematic reviewers dealing with missing outcomes.

Jamie Kirkham, senior lecturer, MRC North West Hub for Trials Methodology Research, Department of Biostatistics, University of Liverpool, UK.

Competing interests: See linked research paper.