
Six proposals for EBM’s future

27 Mar, 15 | by BMJ Clinical Evidence

by Paul Glasziou

Gordon Guyatt coined the term ‘Evidence-based Medicine’ (EBM) over 20 years ago, and it has had a remarkable global influence. But EBM is not a static set of concepts, set in stone tablets in the 1990s; it is a young and evolving discipline. The fundamental concept of EBM – using the best available research evidence to aid clinical care – may have changed little, but what counts as best evidence, and how to apply the concepts in practice, continue to develop. The 3rd ISEHC conference in Taiwan, in November 2014, marked another step in the evolution of evidence-based health care. In the opening plenary, I suggested six areas where EBM’s future attention is needed.

1. Don’t skip “step 0”, but foster doubt, uncertainty and honesty
The “traditional” steps of EBM we teach students are: Ask, Acquire, Appraise and Apply. However, Dr Ian Scott – a physician in Brisbane – has suggested that the most important step precedes these: recognizing our uncertainties. Without this “Step 0”, we cannot begin the other steps. Beginners often ask detailed, convoluted questions. But with experience of uncertainty, we ask more basic questions about our everyday tests and treatments, and about the advice and information we are deluged by. However, we currently understand little of this step of recognizing our basic uncertainties. At McMaster, Sackett often exposed disagreement about clinical signs to raise uncertainty about what is “correct”. Others simply reward students for saying “I don’t know”, instead of treating ignorance as an admission of failure. Both are excellent ideas, but, compared with the other steps of EBM, we have few ideas and almost no research on how best to do “Step 0”. We need to do much more!
2. Beware overdiagnosis: our definitions are as important as our tests
For much of the brief history of EBM, we have taken diagnostic definitions for granted, using them as a starting point to study prognosis or treatment. However, definitions of disease often evolve over time: either incidentally – through improved technology such as spiral CT scans for pulmonary embolism – or through deliberate changes, such as the lowering of thresholds for diseases like diabetes, hypertension or osteoporosis.
“Overdiagnosis” has been low on the EBM radar, but has grown to be one of the largest problems facing medicine. As one example, consider the three-fold growth in the incidence of thyroid cancer in the USA, Australia and other countries. Is that due to radiation or diet? Probably neither; more probably it is an epidemic of diagnosis, not an epidemic of cancer: thyroid cancer mortality has remained unchanged. Even more dramatic is the 15-fold increase in thyroid cancer in South Korea, which arose from the ease with which it was added to national screening programs. Though less dramatic, most cancers have seen substantial rises in incidence that appear to reflect overdetection rather than true increases. Many other diseases have seen changes in definitions, with most expanding. A recent analysis of guidelines identified 14 changed disease definitions, of which 10 widened and only 1 narrowed the definition.
Overdiagnosis causes problems with the interpretation of our evidence about the prognosis and treatment of diseases, as the disease spectrum has changed, sometimes dramatically. Indeed, overdiagnosis is such a threat to the sustainability of medicine that it is a worthy EBM topic in its own right.
3. It is the patient’s decision: practise and teach Shared Decision Making alongside EBM
EBM has always expressed sympathy with the ideas of shared decision making (SDM). For example, the Sackett textbook definition is: “Evidence-based medicine is the integration of best research evidence with clinical expertise and patient values”. But the step of shared decision making gets much less attention in our EBM textbooks and teaching than searching skills or critical appraisal. We need to be much more explicit about the “how to” and teach, as part of the steps of EBM, both generic shared decision making (“options talk” and “decision talk”) and the use of decision aids. A small step is to incorporate SDM into tutorials on critical appraisal. For example, after doing a critical appraisal, I often end with students doing a role play of explaining its meaning: one plays doctor, one plays patient; then we have feedback using “Pendleton rules” – from the “doctor”, then the “patient”, then everyone else; we then swap roles and go again. A similar process could also be used to practise the use of a decision aid. Taking SDM more seriously is not only a good thing in itself, but would also help overcome the common misconception of EBM as a rigid discipline which is not patient-centred.
4. Take non-drug interventions as seriously as pharmaceuticals
If there were a drug which reduced hospital re-admissions for patients with chronic airways disease by 70%, or cut invasive melanoma rates by 50%, or prevented 50% of malaria cases, or prevented 50% of breech births, we would clamour for access. But there are non-drug treatments that offer these benefits, yet are neglected: exercise (“pulmonary rehabilitation”), daily sunscreen, insecticide-impregnated bed nets, and external cephalic version (turning the baby via the mother’s abdominal wall). We neglect them partly because they are not available in a single collection, equivalent to a pharmacopoeia. To avoid this availability bias, those working in EBM need to put more effort into non-drug interventions than drug interventions to redress the existing imbalance. The Royal Australian College of General Practitioners has piloted a Handbook of Non-Drug Interventions, but a global effort is needed to extend this to other disciplines and countries.
5. Build clinical practice “laboratories” to study translation and uptake
Courses in EBM usually spend most time on the theory and skills, but very little – or none – on how to integrate these skills into bedside care. Furthermore, the clinical practice of EBM tends to go unrecorded, remaining out of public view and discussion, which limits the exchange and evolution of methods. We need to better record, evaluate and teach the different ways of “doing” EBM in the clinical setting. In a series of interviews at the CEBM in Oxford, I talked with a dozen leading EBM practitioners in different clinical disciplines. They had very different ways of going about EBM in paediatric oncology, perinatal medicine, surgery, emergency medicine, and general practice. Of course, there are necessary differences, but we may also learn and adapt by finding out the processes of others. We need to treat the methods for the efficient and effective bedside practice of EBM as seriously as we treat the methods for doing a systematic review. To do this, we will need “EBM laboratories” where we can readily observe, record, and analyse the process of using evidence in practice.
6. Invest long-term in automating evidence synthesis
The costs of gene sequencing have dropped dramatically in the last decade: more than 50% per year. This dramatic drop in cost was not chance, but a serious investment in doing sequencing faster, better, and cheaper. By contrast, the costs of evidence synthesis have been increasing as we have increased the rigour of the process. That cost is inhibiting the use and uptake of evidence in practice, with our information landscape littered with out-of-date systematic reviews. We need to dramatically speed up the processes through standardising, streamlining, and – most importantly – automating many of the dozen or so steps in doing a systematic review or other evidence synthesis. It will take time and resources to achieve this – maybe reducing the time by 50% per year – but we need to ignore some of the specific review alligators and start draining the process swamp. Without this automation, we will fall further behind with reviews and updates. And that will mean such reviews are seen as less and less relevant to practice.
If I had my time again, I would have started on these sooner. But as a wise ecologist once said: the best time to plant a tree is 50 years ago, the second best time is today.

Paul Glasziou is Professor of Evidence-Based Medicine at Bond University, and chairs the International Society for Evidence-Based Health Care. His research focuses on improving the clinical impact of research. As a general practitioner, this work has particularly focused on the applicability and usability of published trials and systematic reviews.

  • Fascinating, Paul, that you should help us to celebrate those three liberating words “I don’t know”. During my dermatology learning as a young trainee, I was always grilled. The response “I don’t know” was a dead end and perceived as failure. I even remember, when running my first clinical dermatology meeting as a new consultant, some of my colleagues accusing the trainees of “cheating” for looking up important things before we had a case discussion.
    Things are starting to change now. We have always started our Evidence-Based Dermatology course (now in its 21st year) with a group “I don’t know” in unison, which is fun and releases the potential for learning in a safe environment. At the end of the course, as our trainees start to show fear in anticipation of returning to old-style grilling education, I teach them to say “it is not known, but let’s go and find out”.

    Aye, three very important words, Paul. And as with all words, it is the way you say them that counts.

  • Dear Paul,
    Thank you very much for your contribution. We can learn a lot.

    But there is still a big problem which has seldom been mentioned in discussions of EBM: the lack of research into how doctors integrate the three fundamental pillars of EBM – the evidence, the preferences of the patient, and the skills and experience of the doctor. I admit that the concept of SDM is an important step forward. However, we don’t know how doctors integrate their experiential knowledge – or perhaps have to integrate it – and, much worse, we don’t even know how we can teach young doctors to improve this integration process.

    In my view this is a serious problem that, if not solved, could endanger the future of EBM. It might be the biggest hurdle to implementing EBM, at least in general practice.

    I think that important EBM experts like you should put this topic at the top of the EBM research agenda.

    With kind regards,

    Erik Stolper, GP, PhD
    Maastricht University, the Netherlands.
    University of Antwerp, Belgium

  • Jo van Schalkwyk

    Paul’s first five points are desirable and necessary but not sufficient; his sixth carries a sting.

    As a practising clinician I’d wholeheartedly support the idea that we need to embrace error (in modern business terminology “Fail early, fail often”) and remove the stigma attached to not knowing, and to an imprecise diagnosis. We have often paid lip service to patient empowerment without providing the information and insight that allows the patient to make the optimal choice. We have focused on technical wizardry—the machine that goes ping—and excluded other effective therapies. We have failed to study how good clinicians do good medicine. But even if we fix these problems, there’s still a fundamental lesion that we need to excise from the heart of EBM. Our current decision-making fails to integrate all of the relevant information meaningfully.

    This integration is really difficult, because the tool used by most people (including many competent statisticians) is not up to the task. This tool is conventional ‘frequentist’ statistics. As conceived by Ronald Fisher, it provides absolutely no mechanism for integrating prior information, and in its usual interpretation explicitly denies the use of this information.

    A researcher may be led to design a trial around a premise that is incoherent and in denial of the corpus of basic science; I immediately think of a prospective randomised controlled trial of homeopathy that doesn’t take into account the prior “unreason” of assuming that water has a memory that can be encouraged by gently and repeatedly banging it with a stick. A P-value or similar measure that sets the bar at 0.05 simply doesn’t acknowledge the healthy skepticism that Paul desires in his first point. As Carl Sagan pointed out, “Extraordinary claims require extraordinary evidence”.

    A further example is the semi-empirical ranking of evidence according to “quality”, e.g. “Class A evidence”. Although visually appealing, this has no basis in mathematics. A clear counter-example is how we don’t have prospective, randomised, (double-blind?) controlled data that cigarette smoking causes cancer in man, yet no sane scientist would question the association. (Ronald Fisher himself, an inveterate pipe smoker, did not, however, accept that tobacco causes lung cancer.) We often seem to be paralyzed into incoherence through “lack of good enough evidence”, so we dawdle along making decisions based on prejudice rather than adequately using the evidence we do have!

    There are two ways that the frequentist defect can be fixed. The first is the subjective Bayesian approach, but the problem here is that different people will nominate different priors, and proponents of this approach (as Andrew Gelman has pointed out) often fail to test their models, a clear source of error.

    More robust is the “Objective Bayesian” approach. With very simple assumptions (cf. Cox’s theorem; ET Jaynes has explored this well) this can be shown to be uniquely optimal. Where we simply cannot determine priors, we can use ‘uninformative’ priors.

    But even here there’s a problem, which brings us to Paul’s sixth point. As we acquire vast treasure troves of data, we seem to be more and more inclined to believe that the data will “reveal the truth”.

    This belief is in fundamental denial of what good Science is about. Francis Bacon believed that Science is an incremental approximation towards a knowable truth, but, as Karl Popper pointed out, this doesn’t work well. All of our statements of fact are predicated on premises that may or may not hold. As has been repeatedly shown in the past century (the non-Euclidean basis for General Relativity; quantum tunnelling effects; or more mundane discoveries like how HIV causes AIDS or Helicobacter causes peptic ulcer disease) our initial, fundamental assumptions are often just wrong.

    All of our statements of fact are theory laden, and we need to acknowledge this. Even the data we acquire are determined by where our attention is focused, and the opportunities we have created for data acquisition. The nature of the measurements taken depends on the theories that led to the design and calibration of our measurement instruments. It’s likely that most of this design is robust, but we still need to continually question our most cherished theories (As Konrad Lorenz once said, “I discard a pet theory before breakfast every day. It keeps me young”).

    We can’t really do this adequately without a robust basis for testing our theories, and skeptical examination of these theories _in context_.

    Paul’s sixth point – an automated approach – will mislead us again and again, especially if we simply trust the machine. Machine intelligence is not yet up to the task.

    Automated data analysis provides neither the necessary skepticism nor application of this skepticism to our cherished theories. This is particularly the case if the analysis is based on frequentist premises. Paul’s first and sixth points are in fundamental conflict.

    The data themselves are mute.

    My 2c. Jo.
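    [Editor’s note] The Bayesian updating Jo describes can be sketched with a minimal conjugate beta-binomial example; the trial numbers and priors here are hypothetical, chosen only to show how prior information shifts an estimate:

    ```python
    # Beta-binomial sketch: how a prior shifts the estimated success rate
    # of a treatment. All numbers are hypothetical, for illustration only.

    def posterior_mean(prior_a, prior_b, successes, trials):
        """Posterior mean of a Beta(prior_a, prior_b) prior after observing
        `successes` out of `trials` (conjugate beta-binomial update)."""
        return (prior_a + successes) / (prior_a + prior_b + trials)

    # A small trial: 7 successes in 10 patients.
    successes, trials = 7, 10

    # Uninformative (flat) prior, Beta(1, 1): lets the data speak alone.
    flat = posterior_mean(1, 1, successes, trials)

    # Skeptical prior, Beta(2, 18): centred on a 10% success rate, as might
    # reflect prior basic-science doubt about an implausible therapy.
    skeptical = posterior_mean(2, 18, successes, trials)

    print(f"flat prior:      {flat:.2f}")       # ~0.67
    print(f"skeptical prior: {skeptical:.2f}")  # ~0.30
    ```

    The same data yield very different conclusions under the two priors – which is exactly the integration of prior knowledge that a bare P-value cannot express, and also why the choice of prior must itself be argued and tested.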

  • rameshpmenon

    The “Step 0” highlighted above is the beginning of uncertainty. We face that very often in “developing” countries or resource-poor settings. Most of the “evidence” in EBM is based on the so-called developed world, which more often than not excludes the ambiguous situations of resource-poor settings where it is most needed.
