Well, I have to start with an apology. In one of these columns, I foolishly claimed that the difference between a Peto OR fixed effect meta-analysis and a DerSimonian-Laird random effects meta-analysis was pointlessly academic. It’s not.

Now, this might start getting all statistical, but there is a clear and important difference. Meta-analysis comes in two main flavours: fixed and random. It's clinically important to understand what these terms mean. Any other bits that are added, for example Peto, DerSimonian-Laird, or inverse-variance, are ways of describing exactly how the weighting of each study within the meta-analysis is calculated, and shouldn't worry us too much.

Now, a 'fixed' effect analysis takes as an underlying truth that each of the studies in the meta-analysis gives us a glimpse of a single true 'effect size', and that any variation between them is due to chance alone. Sometimes the results seem too mixed up – heterogeneous – for this to be true. In this setting, we could consider using 'random' effects.
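To make this concrete, here is a minimal sketch of how a fixed effect, inverse-variance meta-analysis pools studies, and how the scatter of the studies around the pooled result (Cochran's Q) flags up heterogeneity. The effect sizes and standard errors are made up purely for illustration; they come from no real review.

```python
import math

# Hypothetical log odds ratios and standard errors from three trials
# (illustrative numbers only, not from any real meta-analysis).
effects = [0.30, 0.45, 0.10]
ses = [0.15, 0.20, 0.12]

# Inverse-variance weighting: each study's weight is 1 / variance,
# so the most precise studies dominate the pooled estimate.
weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q measures how far the studies scatter around the pooled
# estimate; a Q much larger than k - 1 (here, 2) suggests the studies
# may not share a single true effect.
Q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
```

The point of the sketch is only the weighting logic: precise studies pull the pooled estimate towards themselves, and Q tells us whether the fixed effect assumption looks tenable at all.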

A 'random' effects analysis assumes that the studies actually have different 'true' effects, and that all we can do is take an 'average' of those effects. This may be because a treatment works differently in different populations (for example, antihypertensives in Black and Caucasian people), or because there are alternative dosing schedules with different effects. It's often said that a random effects approach should only be used after reasonable attempts to explain the heterogeneity have been made, perhaps by examining clinically sensible subgroups or by meta-regression.
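The DerSimonian-Laird method makes that 'average of different true effects' idea operational: it estimates the between-study variance (tau-squared) from Cochran's Q, then adds it to each study's own variance before weighting. The sketch below uses the same made-up trial data as the fixed effect example; none of the numbers come from a real review.

```python
import math

# Same hypothetical log odds ratios and standard errors as before
# (illustrative numbers only).
effects = [0.30, 0.45, 0.10]
ses = [0.15, 0.20, 0.12]

# Fixed effect quantities that DerSimonian-Laird builds on.
w = [1 / se**2 for se in ses]
fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
Q = sum(wi * (y - fixed)**2 for wi, y in zip(w, effects))

# DerSimonian-Laird estimate of tau^2, the between-study variance,
# truncated at zero when Q is smaller than its degrees of freedom.
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) /
           (sum(w) - sum(wi**2 for wi in w) / sum(w)))

# Random effects weights add tau^2 to each study's variance. This
# evens out the weights (big trials count for relatively less) and
# widens the standard error of the pooled 'average' effect.
w_re = [1 / (se**2 + tau2) for se in ses]
pooled_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
```

Notice that the random effects standard error is larger than the fixed effect one: the price of admitting the studies may have different true effects is a wider confidence interval around the average.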

The reviewers and meta-analysts should make the decision based not primarily on the results of the meta-analysis, but on an understanding of the studies which make up their review. If this doesn't seem to have been the case, then you can do it instead: look at the studies, decide if you think they can reasonably reflect a single true effect, and if so take a fixed effects approach. If they can't, take the random effects model and add a pinch of salt…