Many outcomes give no answer?

Some systematic reviews are confusing. Sometimes this is just poor writing style. Sometimes it’s because the techniques are difficult to grasp (meta-analytic item-response analysis, anyone?). And occasionally it’s because the data don’t seem to add up ‘right’.

Take, for example, a systematic review of an imaginary therapy which, when given to children with an asthma exacerbation, improves their clinical severity scores yet doesn’t change their peak flow recordings. How’s that work?

One explanation might be ‘selective outcome reporting bias’. I find it easiest to think of this as a within-study version of publication bias. (To recap, publication bias is the finding that studies with more dramatic results tend to have a greater chance of being published, and to be published more quickly, than those showing no effect or a negative one.) What happens in some instances is that certain favourable outcomes (e.g. clinical severity scores) are detailed only in those papers which have ‘positive’ results, whereas other outcomes (e.g. peak expiratory flow rate, PEFR) are reported in all studies, regardless of what they show. Pool the published data and the selectively reported outcome looks effective while the consistently reported one does not.

Selective outcome reporting is best assessed by comparing the study protocol, which details the data to be collected, with what actually appears in the final paper. An assessment of the potential for such problems is now embedded in Cochrane reviews, but in other reviews it is worth looking for when the results don’t all add up.
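If you want to see this with numbers, here is a toy simulation in Python (my own sketch, not from any real dataset; every figure in it is invented). It runs two hundred trials of a therapy that truly does nothing, applies the two reporting rules described above (one outcome written up only when ‘positive’, the other written up regardless), and pools what gets reported each way.

```python
# Toy simulation of selective outcome reporting bias.
# All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.0   # the imaginary therapy truly does nothing
N_TRIALS = 200      # number of simulated trials
N_PER_ARM = 30      # children per arm in each trial

def simulate_trial():
    """Return the mean difference and its standard error for one
    outcome in one two-arm trial."""
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / N_PER_ARM +
                 control.var(ddof=1) / N_PER_ARM)
    return diff, se

def pool(effects, ses):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / np.square(ses)
    return np.sum(w * effects) / np.sum(w)

trials = [simulate_trial() for _ in range(N_TRIALS)]

# Reporting rule 1 (the PEFR pattern): every trial reports the outcome.
all_effects = np.array([d for d, _ in trials])
all_ses = np.array([s for _, s in trials])

# Reporting rule 2 (the severity-score pattern): the same outcome is
# written up only when the result is 'positive' (z > 1.96).
reported = [(d, s) for d, s in trials if d / s > 1.96]
rep_effects = np.array([d for d, _ in reported])
rep_ses = np.array([s for _, s in reported])

print(f"Pooled effect, all trials reported:      {pool(all_effects, all_ses):+.3f}")
print(f"Pooled effect, only 'positive' reported: {pool(rep_effects, rep_ses):+.3f}")
print(f"Trials reporting the outcome under rule 2: {len(reported)}/{N_TRIALS}")
```

The pooled estimate over all trials should hover around zero, while the pooled estimate over the selectively reported handful comes out comfortably ‘positive’: the same arithmetic that makes our imaginary asthma therapy look good on severity scores and indifferent on peak flow.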
