On the longest (and probably hottest) day of 2017, Evidence Live kicked off at the Blavatnik School of Government, University of Oxford. Professor Carl Heneghan, new Editor-in-Chief of BMJ Evidence-Based Medicine (EBM), opened the conference with the EBM manifesto, which recognizes problems in research evidence and proposes actions to tackle them. This is my summary of what I learned about fixing the “E” in EBM during Evidence Live.
What is wrong with evidence? The scandal of bad medical research
In his 1994 paper, “The scandal of poor medical research”, Doug Altman argued we needed “less research, better research, and research done for the right reasons.” Despite some progress (e.g. CONSORT statement), medical research continues to suffer from bad conduct and reporting. For example,
- Ben Goldacre (University of Oxford): Checking five major medical journals (COMPare Trials) and ClinicalTrials.gov (Trials Tracker) showed outcome switching is common in clinical trials and many pharmaceutical companies, government agencies, and universities do not report all trial results.
- Mary Dixon-Woods (University of Cambridge): “Verschlimmbesserung: an attempted improvement that made things worse than they already were.” Although this is inevitable in quality improvement projects, methods for evaluating quality improvement interventions often lack scientific rigour to overcome the underlying bias favouring improvement.
- Jong-Wook Ban*: A mixed methods study showed that cardiovascular clinical prediction rules are often developed without citing existing rules or providing clear justification for not doing so, although most authors agree that citing existing rules is important.
- Jamilla Hussain (Hull York Medical School): A systematic review of clinical trials in palliative care showed information about missing data was often incompletely reported and mishandling of missing data was common.
- Tone Westergren (Oslo University Hospital): An analysis of randomized clinical trials included in a systematic review showed few reported adverse effects as recommended in the CONSORT harms extension.
What solutions were proposed?
Many innovative ideas addressing these problems were exchanged at Evidence Live 2017. Some of the proposed solutions are outlined below:
- Expanding the role of patients in research: Simon Denegri (NIHR, INVOLVE) discussed how involving patients and the public in design and delivery of research might improve relevance and efficiency. Amy Price (University of Oxford) described how to report patient and public involvement in research.
- Increasing the systematic use of existing evidence: Jon Brassey (TRIP) proposed a community rapid review system where users conduct and update rapid reviews with support from community and technology.
- Producing better usable clinical guidelines: Eve O’Toole (National Cancer Control Programme) shared methods for developing clinical guidelines using a rapid review process that limits the number of questions by focusing on areas with new evidence, variation in practice, and potential for clinical impact.
- Making research evidence relevant and accessible to end users: Clinicians often fail to use research evidence because they lack skills to find, appraise, and apply relevant evidence in practice. Caroline Blaine (BMJ Knowledge Centre) showed how this challenge may be addressed by using point-of-care tools such as BMJ Best Practice. Additionally, shared decision making may be facilitated by using decision aids in anticoagulation care (Peter Oettgen, DynaMed Plus) and for women with heavy menstrual bleeding (Rachel Thompson, Dartmouth Institute).
- Reducing questionable research practices: Guidelines aimed to improve reporting were proposed for surgical case reports (SCARE statement), surgical case series (PROCESS statement), and adaptive clinical trials (ACE project).
- Ensuring drug regulation is transparent and independent: Fergal O’Regan (the European Ombudsman) stressed that transparency in the drug approval process, including pre-authorization and post-authorization evaluation, is essential to maintaining public trust in medicine.
- Using real world data to support innovation, quality improvement, and safety: Routinely collected data can be used to complement the results of randomized controlled trials (Lars Hemkens, University Hospital Basel), develop prediction models (Katriina Heikkila, London School of Hygiene and Tropical Medicine), prioritize research agendas (Fay Chinnery, Wessex Institute), monitor evidence-based practice (Jamie Falk, University of Manitoba; Kelsey Chalmers, University of Sydney), and assess burden of disease (Gloria Ansa, University of Ghana).
- Educating professionals and the public in EBM to make informed choices: Sharon Mickan (Griffith University) showed that small group education improved clinicians’ confidence and positive behaviours in the use of evidence-based practice. Matt Oxman (Norwegian Institute of Public Health) presented a cluster randomized controlled trial showing that the Informed Health Choices resources led to better assessment of claims about treatment effects by primary school children.
So, what would you do to help fix the “E” in EBM? Bring your ideas, experiences, and challenges to Evidence Live 2018. Until then…
* Jong-Wook Ban, MD, MSc
I am a General Internist in Olympia, WA and a DPhil candidate in Evidence-Based Health Care with interests in research inefficiencies in cardiovascular prediction rule development.