
Threats to traditional systematic reviews

15 Jun, 16 | by BMJ Clinical Evidence

By Jon Brassey

For many years systematic reviews have been placed on a pedestal, relatively free from critical scrutiny. Frequently seen as sitting at the top of the ‘evidence pyramid’, they have been adopted as the main way of assessing the worth of an individual intervention.

More recently, threats to the pre-eminence of systematic reviews have come from multiple areas. Some authors, including myself, have been critical of groups such as Cochrane for creating methods that are so costly in terms of finance and time that too few reviews are done and the majority are not kept up to date.

The rise of the ‘rapid review’ is another ‘threat’ to traditional systematic reviews, as these are increasingly seen as viable alternatives. And, as rapid review methods mature, they will surely win prominence by their ability to deliver robust results in a fraction of the time of traditional systematic reviews, at lower cost, and with better prospects of being kept up to date.

However, an increasingly obvious threat is that of reporting bias: the selective reporting or suppression of information. It is increasingly apparent, and the evidence of its effects continues to mount. There are numerous problems associated with it, for instance:

  • AllTrials reports that over 30% of trials are unpublished, and including unpublished trials in an evidence synthesis can profoundly alter the results of the systematic review. The problem is that the vast majority of systematic reviews include few or none of these unpublished studies.
  • The basis for most systematic reviews is the journal article, and these summaries of a trial omit much important information, such as adverse effects and outcome switching.

The net result is that, for systematic reviews based on journal articles, the results simply cannot be trusted as an accurate reflection of an intervention's ‘worth’.[1] [2] [3] Being generous, we could describe them as supplying a ‘ball-park’ estimate; synthesis of the published evidence alone doesn't support more than that. While some systematic reviews might be accurate, we have no real way of knowing which are accurate and which aren't. So, if the evidence synthesis is based on published journal articles (the overwhelming majority), beware.

But this brings us nicely back to the role of rapid reviews. The few studies comparing rapid and systematic reviews (both based on journal articles) have consistently reported very little difference in results.[4] [5] It appears that a sample of published journal articles gives roughly the same results as the full set of journal articles found by a systematic review (is this really surprising, given that sampling is a widely accepted part of biomedical research?). So, if all you need is a ball-park estimate, do it quickly and at low cost. However, if you want an accurate result you really need to go beyond published journal articles. Systematic reviews based on published journal articles are caught between two stools: they are not quick enough and they are not robust enough.
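The sampling intuition above can be illustrated with a toy simulation (a hypothetical sketch with made-up numbers, not data from the cited comparisons): if published trial results are drawn from a common underlying distribution, pooling a random sample of trials yields an estimate close to pooling them all — which is why a rapid review of a sample can track the full systematic review, while neither corrects for the trials that were never published at all.

```python
import random
import statistics

# Toy illustration only: 60 hypothetical published trials whose effect
# estimates scatter around a true effect of 0.30 (arbitrary units).
random.seed(42)
true_effect = 0.30
trials = [random.gauss(true_effect, 0.10) for _ in range(60)]

# "Systematic review": pool every published trial.
full_pool = statistics.mean(trials)

# "Rapid review": pool a random sample of 15 of the same trials.
sample_pool = statistics.mean(random.sample(trials, 15))

print(f"Pooled estimate, all 60 trials: {full_pool:.3f}")
print(f"Pooled estimate, sample of 15:  {sample_pool:.3f}")
print(f"Difference:                     {abs(full_pool - sample_pool):.3f}")
```

The two pooled estimates land close together; but note that both would be equally biased if, say, the unpublished negative trials were missing from the 60 to begin with.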

This realisation will surely help us move towards a more nuanced approach to evidence synthesis, one not rooted in attempts to capture all journal articles. This new approach must better articulate why the evidence synthesis is required and build from there. And, the new approach(es) must be based on evidence, not faith.

 

Jon Brassey is the founder and director of the EBM search engine the Trip Database. In addition to this he works as lead for knowledge mobilisation at Public Health Wales, is an honorary fellow at the Centre for Evidence-Based Medicine, Oxford and recently started the Rapid-Reviews.info website.
He will be on the panel for a discussion of “Improving the Evidence for Systematic Reviews” on Wednesday 22nd June at Evidence Live 2016.

 

References

1) Turner EH, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008 Jan 17;358(3):252-60.
2) Hart B, et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012 Jan 3;344:d7202.
3) Jefferson T, et al. Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014;348.
4) Hemens BJ, Haynes RB. McMaster Premium LiteratUre Service (PLUS) performed well for identifying new studies for updated Cochrane reviews. J Clin Epidemiol. 2012 Jan;65(1):62-72.e1.
5) Sagliocca L, et al. A pragmatic strategy for the review of clinical evidence. J Eval Clin Pract. 2013 Aug;19(4):689-96.

 
