
Addressing gaps in evidence

9 Apr, 15 | by BMJ

Evidence-based medicine (EBM) is an approach that—in addition to clinical experience and patient preferences—takes into account existing research evidence to draw conclusions on the best approach for the care of individual patients. It is a key tool for clinical decision making, as balancing research, new tests and treatments, and available resources against clinical experience and patient requirements remains an important focus in healthcare.

The sheer (and ever-growing) amount of trial information highlights the problems clinicians face in identifying the best course of action for and with their patients.

Clinicians would need to read all day, every day, and still not be able to keep up with all of the latest trials and studies in their field. This is where systematic reviews and meta-analyses can be useful, summarising the overall efficacy of treatments and trial quality based upon a range of sources. Despite this evidence overload, large gaps in the available clinical evidence remain unexplored, leaving areas with only a few or low-quality studies vulnerable to variation in best practice.

The ability to combine EBM with individual patient needs is a skill required of clinicians, yet it has only recently begun to be taught as standard. Understanding the value and flaws of study methodology, summarising relevant studies, and applying EBM with compassion to individual patient cases make it an often overwhelming task. Critics of EBM have suggested that its inception was more a cost-containment measure than a tool for patient care, but combining clinical evidence, experience and the patient's perspective is the only way to achieve best practice on a consistent basis.

Common pitfalls of EBM
The usefulness of any source of information is equal to its relevance, multiplied by its validity, divided by the work required to extract the information – Slawson et al.

Whilst systematic reviews offer much benefit in bringing evidence together, they can be time-consuming to produce and publish, may therefore lag behind new developments, and require updating when new seminal trial results become available. In addition, there is large variability in the distribution of research activity: while some areas of healthcare, such as expensive new drug treatments in developed countries and secondary care settings, are covered regularly by large trials and systematic reviews, other areas are often neglected. This puts the spotlight on where gaps in clinical evidence can be found, and may leave treatment pathways open to interpretation – resulting in a “healthcare lottery” for the patient depending on their location, clinician and the services available.

The availability of services is just as relevant in research-poor as in regularly studied areas of care. While making evidence-based recommendations may be more straightforward in the latter scenario, if certain interventions are not available, clinicians must make decisions based upon the best evidence they can source for the alternatives. In resource-poor settings, following evidence-based clinical practice guidelines may thus become impossible.

Besides the availability of recommended interventions, clinicians may have a wide range of reasons for not adhering to evidence-based guidance in the context of direct patient care, not all of them justified. The frequency of these deviations, their variance and their underlying reasons are currently not widely recorded in clinical settings. This makes it difficult to assess the size of the problem and to measure the impact of any interventions aimed at embedding evidence-based practice as a routine part of care.

Understanding gaps in evidence
Although systematic reviews often provide information on what is not known as well as what is known, there are currently no accepted standards for displaying this information, making it difficult for users such as researchers, research funders or guideline developers to efficiently harvest gaps and research recommendations. One useful approach is to apply efficacy categorisations to interventions and so to indicate where the evidence base is insufficient. However, many evidence gaps relate to very specific comparisons or clinical outcomes within an otherwise well-studied area. Innovative ways to surface and display such information would support clinical decision making and help to focus research activity on areas of greatest need and maximum impact.

In some areas, it is unethical or otherwise impossible to conduct comprehensive, methodologically sound randomised trials. Instead, observational studies, experience and anecdotal evidence will need to guide clinical decisions.

Gaps in evidence are often seen in areas where randomised double-blind studies are difficult to conduct: surgery is a prime example. There is a distinct lack of clinical evidence on surgical outcomes, because the often urgent nature of surgical procedures demands a personal decision from the surgeon based upon preference and experience, rather than the consideration of reviews. It was recently found that personal preference, rather than clinical evidence, was the biggest influence on surgical decisions, partly because randomised trials were perceived as unhelpful for determining treatment efficacy. Even when randomised surgical trials are conducted, they are often discontinued or never published.

Other knowledge gaps exist for between-drug comparisons. Pharmaceutical companies tend to commission research to demonstrate that their new drug has a large benefit over placebo to use in their approval submissions, but what actually matters to clinicians would be trials that evaluate whether the new drug is better and safer than the current standard of care.

Individual patient needs versus clinical evidence
Patients with chronic conditions present with varying levels of disease severity, numbers of comorbid conditions and stages of disease progression. This makes it very difficult to design affordable yet inclusive trials, even though these are urgently needed to inform evidence-based care for these complex patients.

Mental health treatments in particular are often under-researched, even though they may be widely used in routine clinical practice. Many psychiatric conditions rely on self-reporting by the patient, whereas physical diseases are often easier to study consistently across broad populations. Understanding the individual needs of patients with mental health conditions is essential, but their views on treatment options are often harder to elicit. This creates a minefield for clinicians, who must accommodate their patients’ individual needs while using their clinical knowledge and relying on anecdotal experience to develop clear treatment options.

Other areas that may be underrepresented in research include fluctuating and rare conditions. The very nature of chronic fluctuating conditions means that patients’ lifestyle has a strong impact on their experience of the condition, and each individual resorts to different means to cope with their illness. Conducting trials and systematic reviews is problematic, because of the diversity in patient experience and the practical difficulty of not knowing when symptoms may occur. For rare conditions, it is difficult to carry out sufficiently powered studies to reliably find an effect.

Informing primary research is essential to develop the largest base of clinical evidence possible, in order to progress treatment considerations for these conditions in the long-term.

Identifying and addressing gaps in clinical evidence
Identifying and addressing gaps in clinical evidence requires clear channels of communication and enthusiastic, dedicated and continuous collaboration between clinicians. Individual clinicians need to contribute their own knowledge and experience to the clinical evidence base, working together to highlight where information is missing. It’s not enough to simply identify such gaps, however – it’s also important to communicate them to the healthcare organisations responsible for approving and funding clinical studies and trials. Online databases are important to help standardise the way clinical evidence is stored and displayed, providing a single point of access to existing knowledge and highlighting where more research is needed.

Tools such as BMJ Clinical Evidence help to encourage conversation between clinicians, guideline developers and policy makers – highlighting areas lacking in high-quality trials. It’s crucial that clinical evidence is produced in a reproducible and transparent manner, and is easily accessible to those who need it to inform treatment decisions. BMJ Clinical Evidence allows clinicians to access a broad knowledge base of systematic overviews, to help them weigh up alternative interventions. During its regular topic updates, included studies are revisited in light of new information to either confirm or change conclusions. With over 3,300 interventions covered, BMJ Clinical Evidence helps inform clinicians and provides training and teaching materials on how to apply EBM effectively to clinical practice, in order to establish it as a standard approach to best practice across all disciplines. To find out more, visit the BMJ Clinical Evidence website.
