
But if it’s significant it must be true?

20 Sep, 16 | by Bob Phillips

One thing I keep coming across, from a huge range of folks involved in clinical practice, is the idea that if something is statistically significant, then it's true. Some folks nuance that a bit, and say things like "true 95% of the time" for confidence intervals or p=0.05 …

Of course, there’s an ongoing argument about exactly how to understand p-values in common, understandable language. A simplish and we hope right-enough version can be found here. But underlying that is a different, more important truth.

The stats tests work to assess what might be the product of chance variation. When the data they are testing come from hugely biased studies, with enormous flaws, and the poor little stats machine says p=0.001, the researcher and reader may conclude "this is true". That conclusion is wrong: the result is due to bias and poor research, not a real effect.
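To see this in action, here is a toy simulation (the numbers are hypothetical, chosen purely for illustration): both groups are drawn from the same distribution, so the true treatment effect is exactly zero, but an unblinded assessor systematically scores the treated group half a point higher. The test duly returns a tiny p-value.

```python
# Toy simulation: a "significant" result produced entirely by bias.
# Assumed scenario (not from a real study): true effect = 0, but the
# unblinded outcome assessor adds +0.5 to every treated patient's score.
import math
import random

random.seed(1)

n = 200  # patients per arm
control = [random.gauss(0, 1) for _ in range(n)]
treated = [random.gauss(0, 1) + 0.5 for _ in range(n)]  # +0.5 is pure measurement bias

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two-sample z-test (normal approximation is fine with 200 per arm)
se = math.sqrt(var(control) / n + var(treated) / n)
z = (mean(treated) - mean(control)) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p:.2g}")  # p comes out far below 0.05
```

The p-value here is doing its job perfectly: the difference really isn't chance. It's bias. No statistical test can tell those two apart.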

It may be better to think “this is unlikely to be due to chance” – in remembering that phrase, you’ll hopefully recollect the other reasons why something may not be due to chance too.

