Blog entry on: Characteristics, quality and volume of the first 5 months of the COVID-19 evidence synthesis infodemic: a meta-research study (bmjebm-2021-111710.R1).
Authors: Rebecca Abbott, Alison Bethel, Morwenna Rogers, Rebecca Whear, Noreen Orr, Liz Shaw, Ken Stein, Jo Thompson Coon
Since the emergence of the coronavirus disease 2019 (COVID-19) in December 2019 in Wuhan, China, there has been a proliferation of research relating to its epidemiology, diagnosis, treatment, prevention and impact. Making sense of research by bringing together studies in systematic reviews, with or without meta-analysis, is a well-established method in medicine and health research; systematic reviews are often the go-to resource for decision makers. However, poorly conducted systematic reviews can lead to inaccurate representations of the evidence and misleading conclusions with important implications for health care provision. Being able to rapidly access up-to-date and reliable information has never been more important than during the past 18 months.
As the pandemic began to unfold, our regular work was interrupted on a daily basis by requests for help on COVID-related evidence reviews. Each time we conducted a search to inform these requests, we would get bogged down by the sheer volume of research available. We could also see an ever-increasing number of systematic review protocols being registered on similar and overlapping topics, and some of those that were published seemed to prioritise speed over rigour.
From this curiosity sprang a project exploring the ‘Characteristics, quality and volume of the first five months of the COVID-19 evidence synthesis infodemic: a meta-research study’. The results of the study bore out our fears: in the first few months of the pandemic, in the rush to get evidence, there was a significant output of low-quality systematic reviews missing cornerstones of best practice. Of more concern, perhaps, was the attention these reviews received from other researchers, policy makers and the media.
Because evidence syntheses are reported as ‘systematic reviews’, many readers may regard them as high-quality evidence, irrespective of the actual methods undertaken. The challenge, especially in times such as this pandemic, is to provide indications of trustworthiness in evidence that is available in ‘real time’. Researchers, peer reviewers and journal editors need to ensure that robust methods have been used for research denoted as systematic reviews. For health professionals seeking answers from systematic reviews, we encourage you to use your appraisal skills and not skip straight to the conclusions: read the methods and assess whether the review has used best practice, and, if it hasn’t, whether it has at least acknowledged this and highlighted the limitations. Caveat lector!
Dr Rebecca Abbott
Evidence Synthesis Team, NIHR Applied Research Collaboration South West Peninsula (PenARC), University of Exeter, EX1 2LU.
Conflict of interest: None declared.
The views and opinions expressed on this site are solely those of the original authors. They do not necessarily represent the views of the BMJ and should not be used to replace medical advice. All information on this blog is for general information, is not peer-reviewed, requires checking with original sources and should not be used to make any decisions about healthcare. No responsibility for its accuracy and correctness is assumed by us, and we disclaim all liability and responsibility arising from any reliance placed on such commentary or content by any user or visitor to the Website, or by anyone who may be informed of any of its content. Any reliance you place on the material posted on this site is therefore strictly at your own risk.