1 Jun, 10 | by Bob Phillips
It’s worth taking some time to return to basics every now and again, and one thing that continues to befuddle medics the world over is the issue of ‘statistical significance’.
Take your average trial, say 3% saline vs. placebo for bronchiolitis admissions. You can take the proportion of kids who end up in hospital in each group, and then ask the question: “is this difference just a chance finding, or not?” You – or someone else – then does a statistical test. The test gives you an answer, which tells you how likely a result like yours is to turn up by chance alone: the p-value.
So what? If p = 0.05, does this mean that 3% saline works? Well, it means that if there were really no difference between the treatments (and the samples were unbiased), results this different or even more so would turn up by fluke in only 1 in 20 cases. (If p = 0.01, then in only 1 in 100 cases.) You’ll note the p-value never tells you whether something works or not; it only tells you about chance. By convention, we have decided to call a result “true” if p < 0.05. But this is a convenience, and a fascinating set of debates surrounds this value.
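To make the idea concrete, here is a minimal sketch of a two-proportion z-test in Python, using only the standard library. The admission counts are made up purely for illustration (they are not from any real bronchiolitis trial): 20 of 100 children admitted in the saline group versus 33 of 100 on placebo.

```python
import math

def two_proportion_p_value(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference between two proportions.

    Returns the probability of seeing a difference at least this
    big if there were really no difference between the groups.
    """
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trial: 20/100 admitted on saline vs. 33/100 on placebo
p = two_proportion_p_value(20, 100, 33, 100)
print(f"p = {p:.3f}")
```

Note that the p-value here says nothing about *how much* saline reduces admissions, only how surprising the observed gap would be under pure chance: exactly the limitation discussed above.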
If you think that’s good enough, then take the leap of faith and prescribe. If you’re looking for certainty, get a different job. And if instead of asking ‘Does it work?’ you want to know ‘How well does it work?’, then you don’t want statistical significance – you want an estimate of effect: more about that in another blog.
Acknowledgment: Fisher photo from Zephyrus
ps – Monthly emails explaining statistics from here