If you asked a member of the public “Should researchers review relevant, existing research systematically before embarking on further research?” they would probably be puzzled. Why ask a question with such an obvious answer? Yet in the current research system, research funders and regulators only rarely require researchers to do this.
The most extensive relevant analysis found that published reports of trials cited fewer than 25% of previous similar trials. Furthermore, many researchers appear unaware of existing systematic reviews of research. For example, a review of 136 new trials of interventions to reduce pain on injection of propofol concluded that three quarters were clinically irrelevant because they had neither used the most efficacious intervention as a comparator nor included children. Unnecessary duplication and failure to build on the methods of previous research are resulting in considerable waste in research.
Why does this occur? Given the global nature of research, even researchers active in a particular area are often unaware of studies addressing the same or similar questions to those that interest them, and they are not necessarily skilled in searching for these previous studies. These deficiencies matter because, once identified, previous studies can help to inform decisions about whether further research is needed, which questions remain unanswered, and how additional research can be designed to take account of the lessons from relevant earlier work. Such systematic reviews may yield several kinds of evidence:
A. Good evidence that any plausible effects of an intervention are unlikely to be worthwhile. This would suggest that no further studies are warranted unless there are clear, unanswered related questions. For example, a review of calcium channel blockers for acute stroke concluded: “Further studies in acute ischemic stroke with calcium antagonists acting on voltage sensitive calcium channels do not seem to be justified.”
B. Good evidence of worthwhile effects of an intervention. This would suggest implementing the intervention in practice, while recognising that remaining uncertainties may still warrant further research, for example to identify optimal doses or durations of treatment.
C. Modest evidence of clinically worthwhile effects, but remaining uncertainty. This would suggest funding a larger, better study (but not additional small studies). This was the decision taken, for example, by the English Health Technology Assessment programme in supporting assessments of the effects of steroids for Bell’s palsy and of autoinflation for glue ear.
D. No evidence, or evidence from sparse, small, poor quality studies. This would prompt consideration of whether there is enough other evidence, such as from systematic reviews of relevant preclinical or early phase clinical studies, to warrant a first clinical trial. A review of laetrile for cancer, for example, found no trials, and the reported case series yielded no evidence of substantive benefit but some evidence of harm.
These are examples of some common outcomes of systematic reviews. Additional outcomes include learning points from previous studies, for example about the choice of outcome measures, questionnaires, intervention details, and recruitment methods.
It has been noted that systematic reviews can be misleading if they are done sloppily or interpreted incautiously, and that this might itself contribute to wasted research effort. However, quite apart from the rationale illustrated above for reviewing systematically what is already known, this process is almost always much less expensive than doing inadequately informed primary studies. As streamlining and automation of systematic reviews reduce their costs and time to completion, research funders seem likely to adopt, for applicants seeking support for new primary research, policies similar to that introduced by the National Institute for Health Research:
“Where a systematic review already exists that summarises the available evidence this should be referenced, as well as including reference to any relevant literature published subsequent to that systematic review. Where no such systematic review exists it is expected that the applicants will undertake an appropriate review of the currently available and relevant evidence.”
Paul Glasziou is professor of evidence based medicine at Bond University, and chairs the International Society for Evidence Based Health Care. His research focuses on improving the clinical impact of research. As a general practitioner, he has focused particularly on the applicability and usability of published trials and systematic reviews.
Competing interests: None declared.
Between 1978 and 2003, Iain Chalmers helped to establish the National Perinatal Epidemiology Unit and the Cochrane Collaboration. Since 2003 he has coordinated the James Lind Initiative’s contribution to the development of the James Lind Alliance, the James Lind Library, Testing Treatments interactive, and REWARD.
Competing interests: IC declares no competing interests other than his NIHR salary, which requires him to promote better research for better healthcare.