But if it’s significant it must be true?

One thing that I keep coming across, from a huge range of folks involved in clinical practice, is the idea that if something is statistically significant, then it's true. Some folks nuance that a bit, saying things like "true 95% of the time" for 95% confidence intervals or p=0.05 …

Of course, there's an ongoing argument about exactly how to express p-values in plain, understandable language. A simple-ish and, we hope, right-enough version can be found here. But underlying that debate is a different, more important truth.

Statistical tests assess what might be the product of chance variation. When the data being tested come from a hugely biased study, with enormous flaws, and the poor little stats machine says p=0.001, the researcher and reader may conclude "this is true". That is wrong: the tiny p-value reflects the bias and the poor research, not the truth.
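To see this in action, here's a minimal simulation sketch (in Python, with made-up numbers, not from any real trial): the treatment does nothing at all, but a small systematic measurement bias, say unblinded assessors scoring the treated group a little higher, is enough to make a standard t-test typically report p well below 0.001.

```python
# Minimal sketch (hypothetical numbers): a systematic measurement bias,
# not chance, can drive a p-value to "highly significant" levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n = 200
true_effect = 0.0                      # the treatment genuinely does nothing
control   = rng.normal(loc=10.0, scale=2.0, size=n)
treatment = rng.normal(loc=10.0 + true_effect, scale=2.0, size=n)

# Bias: e.g. unblinded assessors score the treated group 1 unit higher
measurement_bias = 1.0
treatment_measured = treatment + measurement_bias

t, p = stats.ttest_ind(treatment_measured, control)
print(f"p = {p:.2e}")   # typically p << 0.001, despite zero true effect
```

Run it a few times with different seeds: the "significance" is rock-solid, and entirely an artefact of the bias.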

It may be better to think "this is unlikely to be due to chance" – in remembering that phrase, you will hopefully also recall the other, non-chance reasons, such as bias and poor research, why a result may look significant.


– Archi
