What makes a systematic review “complex”?

Originally published on BMJ Opinion. Kamal R Mahtani, Tom Jefferson, and Carl Heneghan discuss: what makes a systematic review “complex”?

Systematic reviews involve systematically searching for all available evidence, appraising the quality of the included studies, and synthesising the evidence into a usable form. They contribute to the pool of best available evidence, help translate research into practice, and are powerful tools for clinicians, policymakers, and patients.

However, as primary research evidence grows in volume and complexity, major problems arise with the publishing, reporting, and interpretation of many of these studies. In some cases much of the evidence remains hidden from view or, when published, important outcomes are selectively reported, hindering synthesis. In addition, multiple interventions often have not been compared head to head, requiring more complex and indirect methods of evaluation. Finally, best practice guidance must be adaptable to real world practice scenarios, which often requires information to be combined from multiple sources of evidence.

To take account of these challenges, methods of evidence synthesis are having to evolve beyond conventional “what works” systematic reviews. Cochrane groups have expanded, with the creation of specific methods groups that support reviews of individual participant data (IPD), screening and diagnostic tests, qualitative and implementation data, prognosis studies, non-randomised studies, and rapid reviews. Other forms of evidence synthesis, such as scoping reviews, realist reviews, meta-narrative reviews, network meta-analysis, and evidence mapping, have emerged to plug the evidence-to-practice translational gap. Research funders are recognising and supporting the production of these evolving and more complicated synthesis methods—an example being the £2 million invested by the National Institute for Health Research (NIHR) in the Complex Reviews Support Unit (CRU).

There is growing recognition that complex clinical and policy questions require more advanced evidence synthesis methods to answer them. This recognition has led to the term “complex review”—now seemingly used to cover a wide range of evolving systematic review methods. However, what is a complex systematic review?

We were unable to find a definition on the CRU website, although we found a list of reasons for the increasing complexity of reviews. We conducted a rapid scope of the literature. Gough and colleagues describe “multi-component reviews” as a strategy for dealing with complex review questions, but we were still unable to identify an explicit definition of a complex review. We found a definition of a “comprehensive review” in the Joanna Briggs methods manual: “A systematic review is considered to be a comprehensive systematic review when it includes two or more types of evidence, such as both qualitative and quantitative, in order to address a particular review objective.” We find this helpful, but insufficient. We identified an excellent series of articles under the banner “Considering Complexity in Systematic Reviews of Interventions” in the Journal of Clinical Epidemiology, although we were unable to identify an explicit definition of a complex review. Lastly, we identified a paper by Whitlock and colleagues, who describe a complex review as one that “evaluate(s) a number of linked clinical questions, multiple interventions or diagnostic tests, different or distinct population groups, and/or many outcomes.” We found this definition the most useful.

Given the growing investment and interest in complex systematic reviews, we need to start a discussion to clarify what is meant by a “complex review.”

We propose the following definition for a complex review: A systematic review consisting of multiple components, large amounts of data from different sources, and different perspectives, united or connected, collectively contributing more than would be expected from their individual contributions; the components are not easily coordinated, analysed, or disentangled.

Furthermore, we propose that for a review to be designated a complex review, it should fulfil at least two criteria from the list below:

• The use of mixed (i.e. qualitative and quantitative) methods. This could include, in the same review, assessment of a diagnostic tool and a connected evaluation of the patient experience of using it.

• The inclusion of a large quantity of data (we propose that authors justify why the quantity of data they report can be considered large). This could, for example, include clinical study reports (CSRs) as a source of data. CSRs are large and highly detailed documents written for regulators as part of a licensing application for a medicinal product.

• Assessment of a complete evidence development programme. For example, a review assessing phase I to IV studies of a technology, or evidence of the development of a complex instrument or large diagnostic tool from the horizon scanning phase to the post-marketing phase, perhaps starting with a definition of its predicate (i.e. the ascendant molecule, instrument, or device from which all other versions and medicinal products have evolved).

• Systematic inclusion of data from several different sources: primary data, literature (published/grey), regulatory, or registry data, all requiring careful handling and skilled analysis.

• Inclusion of both the index intervention and contextual information, either to describe the process through which the intervention is being implemented or to explain the rationale, at registration, for the choice of study design in the light of knowledge about the topic at the time. Both types of information could include timelines.

• Incorporation of different perspectives and viewpoints: societal, healthcare, funders, patients and carers, healthcare workers, manufacturers. These could include multiple evaluations and comparisons using different evaluation methods consistent with the different perspectives.

• The use of a particular method for the first time.

• The need for a highly skilled, multidisciplinary team to complete the review.

Systematic reviews play a vital role in the translation of research findings into practice. However, the field is moving beyond traditional “what works” reviews. This has been recognised by the chief scientific adviser at the Department of Health, Chris Whitty, who has thrown down a gauntlet in stating that “If the academic community as a whole could do one thing to improve the pathway from research to policy, it would be to improve the status, quality, and availability of good synthesis.”

We pick up that gauntlet, and present a proposed definition and criteria for a complex review. If you think that other aspects need to be incorporated, we welcome your comments.

Kamal R Mahtani is a GP and deputy director of the Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford. He is also the director of the evidence-based healthcare MSc in systematic reviews and co-leads a module on “complex reviews.”
You can follow him on Twitter @krmahtani

Tom Jefferson is a senior associate tutor and honorary research fellow, Centre for Evidence Based Medicine, University of Oxford. He also co-leads a module on “complex reviews.”

Carl Heneghan is a GP and director of the Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford.
You can follow him on Twitter @carlheneghan

Disclaimer: The views expressed in this commentary represent the views of the authors and not necessarily those of the host institution, the NHS, the NIHR, or the Department of Health.

Acknowledgements: We thank Jeffrey Aronson for helpful discussions.

Competing interests:

KM receives funding from the NHS NIHR SPCR for the Evidence Synthesis Working Group and is director of an MSc in systematic reviews.

TJ was a recipient of a UK National Institute for Health Research grant for a Cochrane review of neuraminidase inhibitors for influenza. In addition, TJ receives royalties from his books published by Il Pensiero Scientifico Editore, Rome and Blackwells. TJ is occasionally interviewed by market research companies about phase I or II pharmaceutical products. In 2011-13, TJ acted as an expert witness in litigation related to the antiviral oseltamivir, in two litigation cases on potential vaccine-related damage, and in a labour case on influenza vaccines in healthcare workers in Canada. In 2014 he was retained as a scientific adviser to a legal team acting on oseltamivir. TJ has a potential financial conflict of interest in the drug oseltamivir. In 2014-16, TJ was a member of three advisory boards for Boehringer Ingelheim. He is the holder of a Cochrane Methods Innovations Fund grant to develop guidance on the use of regulatory data in Cochrane reviews. TJ was a member of an independent data monitoring committee for a Sanofi Pasteur clinical trial on an influenza vaccine. TJ is a co-signatory of the Nordic Cochrane Centre complaint to the European Medicines Agency (EMA) over maladministration at the EMA in relation to the investigation of alleged harms of HPV vaccines and consequent complaints to the European Ombudsman. TJ is co-holder of a John and Laura Arnold Foundation grant for the development of a RIAT support centre (2017-2020) and a Jean Monnet Network Grant, 2017-2020, for the Jean Monnet Health Law and Policy Network.

CH has received expenses and fees for his media work, including BBC Inside Health. He holds grant funding from the NIHR, the NIHR School of Primary Care Research, the Wellcome Trust, and the WHO. He has also received income from a series of toolkit books published by Blackwells. CEBM jointly runs the EvidenceLive conference with The BMJ and the Preventing Overdiagnosis conference with some international partners, both of which are run on a non-profit model.
