Basics. Size vs. bias

There’s a beautifully clear explanation, behind the BMJ EBM journal’s paywall, of a concept I’ve been struggling to express for some time — one that’s partly there in GRADE and partly grounded in common sense.

Take the parachute argument — do you really need an RCT for parachutes (given there are survivors of non-’chuted falls)? — and the reductio ad absurdum leaps to ‘so all EBM is bunk’. As discussed earlier, EBM is not all RCTs, so this particular strawman fires brightly away from anything meaningful, but it does illuminate a problem. What about situations where non-RCT evidence is good enough?

The ‘mother’s kiss’ for nostrilly lodged crayons is a good example. It works; so why do an RCT? Well, it’s not just that – it works, and it’s unlikely to cause harm, and it’s a situation where the crayon’s not coming out on its own – so why do an RCT?

To frame it another way, ask “what biases would have to be present in these observational studies, and how large would those biases have to be, in order to invalidate the result?” If the answer is SO large that you wouldn’t believe it was possible, then you don’t need an RCT. The smaller the proposed effect size, the greater the need for randomised trial data. As a rule of thumb, if the effect is a relative risk of >5 (or <0.2, i.e. <1/5th), then you’ll be happy with good observational data. The closer it gets to >2 / <0.5, the more plausible a biased explanation of the result becomes.
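One published way to put a number on that question is VanderWeele and Ding’s E-value: the minimum strength of unmeasured confounding (on the risk-ratio scale) that would be needed to fully explain away an observed relative risk. This sketch is just an illustration of the rule of thumb above — it isn’t from the paywalled paper — using the standard E-value formula RR + √(RR × (RR − 1)):

```python
import math

def e_value(rr):
    """Minimum strength of unmeasured confounding (as a risk ratio,
    for both the confounder-exposure and confounder-outcome links)
    required to fully explain away an observed relative risk.
    Formula: RR + sqrt(RR * (RR - 1))."""
    if rr < 1:
        rr = 1 / rr  # for protective effects, take the reciprocal first
    return rr + math.sqrt(rr * (rr - 1))

# The post's rule-of-thumb thresholds:
for rr in (5, 2):
    print(f"Observed RR = {rr}: confounding would need RR >= {e_value(rr):.1f}")
```

An observed RR of 5 needs a confounder associated with both exposure and outcome at roughly RR ≈ 9.5 to be explained away — implausibly large in most settings — whereas an observed RR of 2 needs only about 3.4, which many ordinary biases could manage.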

In shorthand – appraise your evidence (i.e. assess the size of the threats to validity), evaluate the importance of the effect, and ask how closely it fits your PICO – don’t just go all ‘SR = 1a and case study = 4’ on us.

– Archi
