6 Nov, 09 | by John Offen
It is well recognised that many students and qualified nurses alike struggle to get their heads around statistics. Confidence intervals are essential to understanding nursing research, but can instil feelings of blind panic in the uninitiated. Like so many technical concepts they are intimidating when you don't understand them, but not so difficult once you do. So what does it mean when a study reports that those receiving a treatment are twice as likely to be cured as those in a control group who do not receive it (95% CI 1.7 to 2.2)?

Nursing studies take place in the real world, and the way individuals respond to a particular treatment, or the precise way it is applied, will vary. Researchers try to minimise these differences, but such 'sampling errors' are inevitable. In a study with few participants this can produce a confusing spread of results: the treatment might work for individual A but not for individual B. So what is the true effect of the treatment? When large numbers are recruited to a study, we can begin to expect the average treatment effect to represent this true effect. In fact, the larger the number of participants, the more certain we can be about the accuracy of the result.

So how accurate does a study need to be before we are prepared to claim that its results are valid? Most researchers consider a finding statistically significant if they can be 95% certain of the result. In practice this means that, rather than quoting a single figure for the effect of a treatment, the authors of studies specify an interval within which they are 95% certain that the true figure lies. In this case they claim to be 95% certain that the true figure lies somewhere between 1.7 and 2.2.
As the lower estimate of effectiveness of 1.7 is still well above 1 (if it were 1 the treatment would be of no benefit; if it were below 1 it would be harmful) we can say that we are pretty confident that the treatment is effective.
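For readers who like to see the arithmetic, here is a minimal sketch (in Python, using made-up counts rather than figures from any real study) of the standard way a 95% confidence interval for a relative risk is calculated on the log scale:

```python
import math

# Hypothetical counts, for illustration only: 60 of 100 treated
# patients cured versus 30 of 100 controls -> relative risk = 2.0.
cured_t, n_t = 60, 100
cured_c, n_c = 30, 100

rr = (cured_t / n_t) / (cured_c / n_c)

# Standard error of log(RR) under the usual normal approximation.
se_log_rr = math.sqrt(1 / cured_t - 1 / n_t + 1 / cured_c - 1 / n_c)

# 1.96 is the z-value that leaves 2.5% in each tail of the normal
# curve, which is what makes the interval a 95% one.
z = 1.96
lower = math.exp(math.log(rr) - z * se_log_rr)
upper = math.exp(math.log(rr) + z * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Notice that the sample sizes sit in the denominator of the standard error: recruit more participants and the interval tightens around the estimate, which is exactly the point made above about large studies.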
So I think I understand confidence intervals, and I am 95% confident that my explanation above is more or less what it is all about. But perhaps one of you statistically minded people can explain to me what is so special about the 95% CI that has made it standard across nursing studies. This de facto acceptance of an apparently arbitrary figure troubles me. The community seems happy that an odds ratio with a 95% CI of 1.1 to 1.4 demonstrates a result that favours treatment, whereas the same study could show, say, a 97% CI of 0.9 to 1.5, and it would presumably be rejected on the basis that the interval includes values of 1 and below. So why is 95% so pivotal, and should we feel happy to jump out of an aeroplane 95% sure that the parachute will open?
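On that 95%-versus-97% point: demanding more confidence always widens the interval, so the same data can look conclusive at one level and inconclusive at another. A quick sketch (hypothetical counts, and the same standard log-scale approximation used for relative risks) shows the effect:

```python
import math
from statistics import NormalDist

# Made-up trial counts for illustration: 60/100 cured on treatment,
# 30/100 cured in the control group.
log_rr = math.log((60 / 100) / (30 / 100))
se = math.sqrt(1 / 60 - 1 / 100 + 1 / 30 - 1 / 100)

intervals = {}
for conf in (0.90, 0.95, 0.99):
    # z grows as we demand more confidence, stretching the interval.
    z = NormalDist().inv_cdf((1 + conf) / 2)
    intervals[conf] = (math.exp(log_rr - z * se),
                       math.exp(log_rr + z * se))
    lo, hi = intervals[conf]
    print(f"{conf:.0%} CI: {lo:.2f} to {hi:.2f}")
```

The 95% figure itself buys nothing special statistically; it is simply the convention at which the trade-off between certainty and precision has been struck.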