There’s an issue with EBM. It’s that it relies on the best available evidence. So what happens when the best evidence is deliberately obscured, hidden behind a paywall, or subject to the precursor of publication bias (“can’t be arsed” bias, where the folk performing an investigation don’t have the motivation to write up, present, and submit their work for publication)?
How should we cope with this?
One approach takes a ‘Bayesian’ view: that we each come to a question with some preconceived notion of how likely the answer is to be true. The study then supplies evidence of a certain strength; we combine the two and out pops our final assessment of whether it’s true or not.
In diagnostics this is all clear: the pre-test probability varies by situation (for example, if someone calls with a K+ of 6.5 on the baby whose heels you squeeeeezed the blood from, that sample is almost certainly haemolysed, and you may respond rather differently than if the call relates to the child with the blocked CAPD catheter looking rather peaky on the renal ward).
In therapeutics we have the 95% CI and the p-value; but these come from a position of ‘equipoise’. We need to readjust them in light of where we really start. Now the maths of this is possible and very well worked out, but it’s also pretty hard to do and most of us really don’t have the time. In ballpark figures, a p=0.05 is equivalent to an LR of about 20; run that up your (Fagan) nomogram from where you start and that should give you a better idea of what, if the evidence is unbiased, you should be believing about the result.
(For example, if you only have a 5% belief in the efficacy of probiotics, that’s prior odds of about 1:19; an unbiased trial showing they work at p=0.05 multiplies those odds by about 20, landing you at roughly 1:1, so you’ll theoretically be shifted to a 50% belief in yoghurt… Note this relies on the evidence itself being ‘unbiased’…)
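If you’d rather let a computer do the odds arithmetic than run a pencil up a nomogram, here’s a minimal sketch in Python. The function is my own illustration of the odds form of Bayes’ theorem, and the “p=0.05 ≈ LR of 20” conversion is just the ballpark figure above, not a precise rule:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: posterior odds = prior odds x LR."""
    prior_odds = pre_test_prob / (1 - pre_test_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The probiotics example: a 5% prior belief, and an 'unbiased' trial at p=0.05
# treated (per the ballpark above) as a likelihood ratio of about 20.
print(post_test_probability(0.05, 20))  # ~0.51, i.e. roughly a 50% belief in yoghurt
```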
– Archi