It’s all the same

I am regularly faced with questions comparing two management approaches, and sometimes struggle to work out what the supporting data actually show: that one thing is better, that one thing is maybe not better but at least not worse, that the two things are the same, or that we can't really tell what the differences might be. Technically, I am looking to see whether one approach is superior, non-inferior, or equivalent, or whether the data are just too scarce to tell.

It can be helpful to work out what you really need to know when addressing this sort of dilemma. If you need to know whether strategy A is definitely* better than B, then you'll be wanting a confidence interval (CI) for the risk ratio of whatever outcome you're considering that doesn't include 1.
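To make that concrete, here is a minimal sketch in Python; the 2×2 counts are made-up illustrative numbers, not from any real trial, and the CI uses the standard log-risk-ratio method.

```python
import math

# Hypothetical 2x2 table: events / totals in each arm (illustrative numbers only).
# The outcome counted here is a bad one (e.g. treatment failure), so RR < 1 favours A.
events_a, total_a = 30, 100   # strategy A
events_b, total_b = 45, 100   # strategy B

risk_a = events_a / total_a
risk_b = events_b / total_b
rr = risk_a / risk_b

# Standard error of log(RR) and 95% CI (log method)
se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# Superiority of A: the whole CI sits below 1
print("A superior to B" if ci_high < 1 else "CI includes 1: superiority not shown")
```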

If you need to know anything else, you first need to define a clinically meaningful difference. Deciding this is challenging, but say you conclude that a 2-point difference in a severity score is important: that is your clinically meaningful difference. For equivalence, you want the 95% CI for the difference between the therapies to stay within 2 points in either direction. For non-inferiority, you want to make sure the 95% CI for the difference the new treatment makes to the severity score doesn't include "-2" (that is, it rules out the new treatment being 2 points worse). If the confidence interval stretches wider than that, especially if it crosses the line of no effect, then your data are just too few to give an answer. (Or "underpowered", if you want to sound posh.)
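A minimal sketch of that margin logic, in Python; the ±2-point margin, the convention that positive differences favour the new treatment, and the example intervals below are all illustrative assumptions, not from the post.

```python
def classify(ci_low, ci_high, margin=2.0):
    """Classify a 95% CI for the treatment difference (new minus standard),
    where positive values favour the new treatment and `margin` is the
    clinically meaningful difference (here, 2 points on a severity score)."""
    if ci_low > 0:
        return "superior: the whole CI favours the new treatment"
    elif ci_low > -margin and ci_high < margin:
        return "equivalent: the difference is confidently smaller than the margin"
    elif ci_low > -margin:
        return "non-inferior: maybe not better, but not worse by the margin"
    elif ci_high < -margin:
        return "inferior: the new treatment is worse by more than the margin"
    else:
        return "inconclusive: the CI is too wide (underpowered)"

print(classify(0.4, 1.8))    # superior
print(classify(-1.5, 2.5))   # non-inferior
print(classify(-3.0, 1.0))   # inconclusive
```

The ordering of the checks just gives superiority precedence over equivalence when both hold; a real analysis would state the hypotheses and margin in advance rather than classifying after the fact.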

Armed with these definitions, you can decide, when you compare things, whether you will be satisfied with superiority, non-inferiority or equivalence. Or you can do, as I sometimes end up doing, whatever we usually do.

* definite 95% of the time
