Confident in predicting? Meta-analysis models, step two.

So, in a previous post I made a foray into the dangerous world of statistical models of meta-analysis.

Now, I’ll try hard to explain why we need to start doubting random effects meta-analysis more than we often have done.

To recap – fixed effects means that there is one truth that is unaltered across all settings, times, and groups of patients. Random effects implies the truth varies across any or all of these, which means that we can only get at the ‘average’ effectiveness and can only guess at how good it will be in our own setting.

Each meta-analysis gives you a summary result and a confidence interval. In the case of a fixed effect analysis, this is the best guess of how good the treatment is, and the confidence interval gives you a fair idea of where the truth really does lie. With a random effects result, it’s similar, but the confidence interval tells you where the ‘average’ of the true effects is likely to be found. The effect in different settings may be even more extreme than this. What we’d like to know is what the variation in real effects might be … and this is very occasionally reported … it is the ‘prediction interval’.

The prediction interval looks a lot like a confidence interval; it is the range within which, given the information we have from the review, we are 95% sure the true effectiveness will lie in different settings. It captures just how uncertain we really are about the truth in varied groups. If it’s not been reported, you can calculate it, but you’ll need to take a pencil, a sharp intake of breath/coffee/alcohol, and a look at the very readable paper by Riley & friends here.
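For the brave, here is a minimal sketch in Python of the approximate calculation Riley and colleagues describe: the pooled effect, plus or minus a t multiplier (with k − 2 degrees of freedom, where k is the number of studies) times the square root of the between-study variance (tau-squared) plus the squared standard error of the pooled effect. The function name and every number in the example are invented for illustration – swap in whatever your own review reports.

```python
# A minimal sketch (not the paper's own code) of the approximate 95%
# prediction interval: pooled effect +/- t(k-2) * sqrt(tau^2 + SE^2).
import math
from scipy.stats import t


def prediction_interval(mu_hat, se_mu, tau2, k, level=0.95):
    """Approximate prediction interval for the true effect in a new setting.

    mu_hat : pooled (average) effect from the random-effects model
    se_mu  : standard error of that pooled effect
    tau2   : estimated between-study variance (tau-squared)
    k      : number of studies in the review (needs k >= 3 for k - 2 df)
    """
    # t multiplier with k - 2 degrees of freedom
    t_crit = t.ppf(1 - (1 - level) / 2, df=k - 2)
    half_width = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu_hat - half_width, mu_hat + half_width


# Hypothetical example: pooled log odds ratio -0.30 (SE 0.10),
# tau-squared 0.04, from 10 studies.
low, high = prediction_interval(mu_hat=-0.30, se_mu=0.10, tau2=0.04, k=10)
print(f"95% prediction interval (log odds): {low:.2f} to {high:.2f}")
```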

(If you can’t manage that, then on the whole, if you take half the width of the confidence interval and extend each end outwards by that value, you’ll not be far off.)
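To make that rough rule concrete with invented numbers: if a random-effects review reports an odds ratio of 0.70 with a 95% confidence interval of 0.60 to 0.80, half the width of that interval is 0.10, so the back-of-the-envelope prediction interval stretches from about 0.50 to 0.90.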

Now you should feel braver and more confident in looking a meta-analysis in the eye and asking ‘But is it really, really as accurate as all that?’
