A diverse collection of people, ranging from medical journal editors to the head of the clinicaltrials.gov trial registration website, gathered at the third meeting of the SPIRIT initiative in Toronto on October 2. SPIRIT stands for “standard protocol items for randomised trials.” The goal of the initiative is to improve the quality and completeness of trial protocols. As the executive summary for the project says, protocols “have become increasingly important for transparency and evaluation of trial results.” The first consensus meetings of the SPIRIT initiative were held in 2007 and 2009. They resulted in a checklist for trial protocols and an explanatory paper that is still in development. The focus of this third meeting was to develop a “knowledge translation strategy” for the project – in other words, to figure out how best to share information about this new guideline with authors, editors, and others involved in the design and reporting of randomised trials.
It was my first time with the group – deputy BMJ editor Trish Groves had participated in the first two meetings. I was pleased to attend. As a research editor I spend a lot of behind-the-scenes time trying to reconcile what is reported in research papers describing clinical trials with what the researchers planned to do when the trial began. In most cases, those intentions are best identified by a look at the study protocol, but it’s remarkably hard to get one’s hands on that document, even though the BMJ requires protocols for all randomised trials. And once received, many protocols aren’t all that helpful. Thus I arrived at the SPIRIT meeting keen to help publicise and implement anything that will improve the quality of protocols.
There are many reasons why clinical trial protocols, as they currently exist, are not much help when it comes time to evaluate the results of a study. For starters, the BMJ gets a large number of papers from non-English-speaking countries. Protocols for those studies might be in Dutch, Swedish, Chinese, or other languages and require translation. And then there is the question of which version of the protocol is needed. We often tell authors we’d like to see the protocol submitted to the ethics review board when the study was originally approved. It’s not possible to verify the date of any such document, however, and we don’t really know whether the protocol we get was prepared before the study started, or last week after we requested it.
Additionally, an initial or early study protocol will not reflect the many modifications that a study has undergone over time. Protocols are “living documents,” said one of the SPIRIT attendees. Changes to protocols may be made in response to matters raised during ethics committee review, to the practicalities of trial design, funding, and conduct, or to both. Most changes are minor, but some are substantive: alterations in study inclusion or exclusion criteria, for example, or a change in the sample size calculation or primary outcome measure. And finally, there may be multiple, slightly different versions of a protocol in a multicentre study where local review board approval is required for each site.
The SPIRIT checklist will require the a priori declaration of core study information and analysis plans, and is specific about the level of detail that is necessary. This will make it much more difficult to conceal or manipulate a trial’s results. At present, a bare-bones record of a clinical trial’s design at conception may exist on a trial registration site such as clinicaltrials.gov. Some of these sites also track the date and nature of any changes made to the registration. But the degree of detail on these sites leaves a lot to be desired. Primary outcome measures are listed, for example, but the fine points are often missing. A study randomising patients to statins or placebo might list “myocardial infarction” as an outcome, but say nothing about how it will be defined, at what point it will be measured, or whether the analysis will use a time-to-event or some other sort of methodology. This leaves a lot of room for later manipulation.
The checklist that will be proposed by the SPIRIT group won’t solve all of these problems, but it is a very good start. As Dr An-Wen Chan, one of the leaders of the project, has pointed out, we have reporting standards and checklists for all phases of clinical trials except the protocol stage, which is arguably the most important. There are certainly many barriers to the full adoption of the SPIRIT guidelines. Their implementation is undeniably more complex than that of reporting guidelines such as CONSORT or STROBE.
No one at this third meeting thought the business of “knowledge transfer” would be quick or simple. I think it’s fair to say, though, that we all shared a strong belief in the power of reporting guidelines to make an important and demonstrable difference in the quality of published reports. (We agreed, however, that it would be important to verify that this occurred.) Other reporting guidelines such as CONSORT and STROBE have resulted in a remarkable improvement in the quality of papers published not just at the BMJ and other major journals but also at smaller subspecialty journals such as Headache and Cephalalgia, where I work as a volunteer associate editor. Discrepancies between protocols and published trial results are distressingly common; adoption of the SPIRIT guidelines should eventually make it easier to understand the reasons for any differences.
Elizabeth Loder is the BMJ’s US-based clinical epidemiology editor.