By Emily Sena @drEmilySena
This week’s The BMJ includes a collection of articles on preclinical animal studies! As editor-in-chief of BMJ’s first ‘science’ journal, it’s exciting to see the conversation about the validity of preclinical animal studies, and the manner in which they inform the development of new treatments for patients, feature in our big sister journal. The collection focuses on one example: the development of a new vaccine, MVA85A, designed as a booster to the BCG vaccine to improve the prevention of tuberculosis. It didn’t. The central investigative article by Deborah Cohen, which tells the story, discloses my involvement as a conflict of interest. My day job is as a meta-researcher of preclinical studies, and I was indeed involved. I want to discuss how this and similar experiences have, in part, informed my editorial approach at BMJ Open Science.
A systemic problem
In September 2013, I was sitting in the departure lounge of Montréal’s international airport en route to Edinburgh after the 21st Cochrane Colloquium. I had given a talk titled “Systematic Reviews of Neurological Disorders: Evidence for the Impact of Bias”. Opposite me was a man I recognised from the meeting; he had heard my talk and we started chatting. By the time I arrived back in the UK, Paul Garner was a new collaborator, and he came with an interesting project – a systematic review of the effects of the MVA85A vaccine on tuberculosis. We also discovered that we were both BMJers.
At that point, my research had primarily focused on understanding the limitations of experimental design and reporting in the modelling of neurological disorders. I conducted systematic reviews of preclinical studies to inform how we might improve the likelihood of translation. When published, the MVA85A review was the 27th systematic review of preclinical research that I had co-authored, and sadly its low reporting quality did not stand out to me. Few of the primary studies that I have critically appraised over the years report measures to reduce risks of bias, such as randomisation, blinding and sample size calculations. Cohen’s report correctly states that this MVA85A ‘saga’ is but one example of the systematic failure afflicting the translation of preclinical research. I wholeheartedly agree, and I have a stack of empirical evidence from a range of preclinical disciplines that supports this assertion.
The specifics of this MVA85A story are substantially more complex and go beyond limitations in the reporting quality of preclinical research. Nevertheless, the story serves as an excellent reminder that the way in which we undertake preclinical research can have a tangible impact on the people we purport to serve with our research endeavours.
What are we doing to address these problems?
The accompanying editorials to Cohen’s investigative report are a call to arms. As a community we need not only to improve the design, reporting and transparency of animal studies, but to preregister them too. One of the major areas of contention in Cohen’s report is the purpose of a macaque study in which MVA85A was trialled and found not to be effective. Essentially, there is uncertainty about whether the study was performed to test vaccine efficacy or to test a new aerosol challenge model of TB. Had a protocol been published, or better still had the study been conducted as a registered report, this uncertainty would be moot. Of course, when this particular macaque study was performed, I am not aware of any platform that existed for publishing a protocol or registered report of preclinical research. At BMJ Open Science we have policies directly targeted at these endeavours. Our purpose is to ensure confidence in the research that we publish.
People do not generally welcome unsolicited objective critical appraisal of their scientific practice. But I do it because I think we can learn from what has and has not worked to make preclinical research fit for purpose. This is why we accept meta-research. I’m glad to say that things are improving, but we have a long way to go. As Editor-in-Chief I have had the opportunity to create a platform that values and promotes research of high methodological quality. In addition to distinguishing between exploratory and confirmatory original research, and accepting registered reports and protocols, we mandate open data, request submission of an ARRIVE checklist and ask additional methodological questions at submission. Authors are specifically asked about randomisation, sample size calculations, blinding and inclusion/exclusion criteria. These are not criteria for acceptance but are asked for reasons of transparency, allowing readers to make objective inferences that take a study’s strengths and limitations into consideration.
These are just some of the steps we are taking to underscore the validity of the preclinical research we publish. Biomedical journals are one stakeholder in the research pipeline. Researchers, ethics committees, funders and institutions also need to value and reward approaches seeking to improve the validity and reproducibility of preclinical research. Ritskes-Hoitinga and Wever call for cultural change. I am optimistic that we are moving in the right direction. In the meantime, at BMJ Open Science, we will continue to implement policies to facilitate positive change; some are ambitious but I think well worth the effort.
Emily Sena (ORCID ID: 0000-0002-3282-8502) is Editor-in-Chief at BMJ Open Science and a research fellow at the University of Edinburgh. She co-authored the original MVA85A preclinical systematic review.
Conflicts of interest: Emily Sena is Editor-in-Chief of BMJ Open Science and a co-author on the MVA85A preclinical systematic review referred to in this article.