We’ve approached EBM as a framework for thinking, not a checklist to tick through.
But when it comes to applying evidence to answer a question – how much is enough?
This might mean ‘how much benefit do I need to produce to make using New$$Drug worthwhile?’ But for this post, we’ll leave cost-effectiveness and its pals to one side. Here we are asking:
“How much evidence should turn me from doing A into doing B?”
Now I really do wish that there was a simple answer to this. But there isn’t.
There is something about personality and consistency and group-feeling. If you’ve not already done so, have a think about diffusion of innovation, early adopters and the like. There is something about our practice being subject to inertia: the larger and more massive the thing you wish to change, the more difficult it is (and the more evidence you would hope to need). And there is something about how trustworthy the evidence is, which is a combination of strong validity, homogeneity of study results and size of effect.
All in all, there’s no simple formula to make the ‘application’ step of EBM work mechanically. (The same is true of the other steps.)
What we do have is a lovely series of articles in the E&P journal which indirectly address the question of how to make improvements happen, grounded in evidence, expertise and patient values. Check out the series via its introduction here and accompanying podcast.
(There’s a really nice view on this type of thing here as well.)
EDIT: and as an example, this:
In the mixed economy of evidence utilisation, the earlybird rapid review catches the policy worm.
— Trisha Greenhalgh (@trishgreenhalgh) January 29, 2014