By Chris Chambers, @chrisdc77
In 2015 the UK Academy of Medical Sciences published a landmark report documenting the current state of reproducibility in preclinical and biomedical research. The analysis makes for sobering reading. Preclinical science is littered with small, biased studies, underspecified procedures, and poor statistical methods, driven by perverse career incentives that prioritise quantity over quality and storytelling over accuracy. Editors at major journals enforce publication bias in favour of positive, exciting results, while institutions are failing to train the next generation of scientists to avoid and solve these problems. The upshot is that most of the preclinical literature may be unreliable.
The downstream consequences of this failure are severe. Two months after the Academy published its report, an economic study placed the cost of poor reproducibility at $28 billion in the US alone, sabotaging the prospects of clinical translation and hampering drug discovery. In the wake of so many failed clinical trials, Pfizer recently announced the termination of its neuroscience discovery programme, including treatments for Parkinson’s and Alzheimer’s disease. It would be unfair to lay the blame for this decision solely at the feet of preclinical researchers, but it is impossible to ignore low standards in preclinical science as a contributing factor.
This bleak state of affairs prompts us to ask: what drives poor reproducibility and how can we fix it? The causes are complex, but all reduce to the same fundamental problem: the life sciences are traditionally a closed system. Preclinical study protocols are rarely published, allowing forms of analytic and reporting bias (such as p-hacking) to run amok, while data and crucial study materials are seldom made available to the public or other scientists. Meanwhile, career incentives entrench questionable practices by prioritising quantity of publications over quality. Curing these ills requires us to give researchers both the tools to embrace open practices and the incentives to use them.
This, at its heart, is the core mission of BMJ Open Science – to provide a high profile outlet for preclinical science that achieves the highest standards of quality, rigour and transparency. The journal mandates open data and offers Badges for Open Practices including study preregistration, open data and open materials. It also enforces guidelines for thorough reporting of methodological practices such as blinding, randomisation, and sample size determination, and welcomes exploratory studies that seek to generate rather than test hypotheses.
Crucially, BMJ Open Science also offers Registered Reports – a radical reshaping of the publication process. Registered Reports are a new type of research article in which the study protocol undergoes peer review before research is undertaken. Protocols that address the most important questions using the most robust methods are then accepted in advance, regardless of how the main results may turn out, eliminating publication bias and other forms of reporting bias. Once the data are in, the full paper is re-reviewed to ensure that authors followed the approved protocol and that the conclusions are based on the evidence.
Registered Reports are rapidly expanding in mainstream journals, currently offered by 91 journals and rising. They are highly cited – on average above the impact factors of their respective outlets – while also liberating researchers from the pressure to generate particular results, or to tell particular kinds of stories, to achieve publication. Although submissions are rigorously reviewed both before and after results are acquired, the two-stage review process prevents goalpost shifting by reviewers who may be dissatisfied with certain findings or who may demand that authors perform long lists of additional experiments or post hoc analyses. Authors are welcome to report unregistered analyses in a Registered Report, but they are not usually required to do so. For a Registered Report, accuracy and transparency – not storytelling – are placed front and centre.
Registered Reports are suitable only for hypothesis-driven research and are not a one-size-fits-all solution for science. They are, however, a powerful tool for aligning the incentives that motivate individual scientists with the wider needs of the scientific community. Situated within the broader framework of editorial policies at BMJ Open Science, we believe Registered Reports will help ensure that preclinical science is match-fit for translation.
Chris Chambers is the Registered Reports editor at BMJ Open Science, chair of the Registered Reports committee at the Center for Open Science, and professor of cognitive neuroscience at Cardiff University.
Conflict of interest: Chris Chambers is the Registered Reports editor at BMJ Open Science.