Crystal balls

It’s a great sport of journalists and commentators to look back at predictions of the future from decades past and see just how badly they went astray. We do this as clinicians too, but with a sense of guilt … looking back at an unexpected relapse of a low-risk tumour, or a fulminant hepatitis that presented with mild nausea, and asking “Why didn’t we predict that?”

Prognostic studies are important to us, and to our patients. When we see a study describing the outcome of a condition, we should be asking the ‘RAMbo’ questions of it: “Is this collection Representative of the cases we actually see?”, “Did they collect all of them – or is there some degree of Ascertainment bias?”, “How did they Measure outcomes – are they important? Are they Blinded? Are they Objective?”. But this is just the start of the questions we need to ask.

We should also be asking – “Can it all be chance?” If there are 20 variables, and one shows a predictive ability, is it just luck? (As a rule of thumb, you should be wary of any study that has fewer than ten events per variable – so a study with 20 deaths can convincingly examine only two variables.) We should ask “Does this study add anything?”. For example, a new cancer marker may predict, with 95% accuracy, those patients who are at high risk of relapse. But if that marker only really identified patients with metastatic disease – and you knew that from your scans – then why do the test? And we should always ask “Will this information help anyone?” – the patient, the family, or ourselves?
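For those who like to see the arithmetic behind the “can it all be chance?” worry, here is a minimal sketch in Python. It assumes the conventional 0.05 significance threshold (not stated in the post) and simply works through the two rules of thumb above.

```python
# A minimal sketch of the two rules of thumb above.
# Assumption: each variable is tested at the conventional p < 0.05 level.

alpha = 0.05        # chance one unrelated variable looks "predictive" by luck
n_variables = 20

# Probability that at least one of 20 unrelated variables shows a
# spurious association purely by chance:
p_any_by_chance = 1 - (1 - alpha) ** n_variables
print(f"P(at least one spurious predictor) = {p_any_by_chance:.0%}")  # ~64%

# Events-per-variable rule: roughly ten events for every variable examined,
# so 20 deaths support a convincing look at only two variables.
n_events = 20
max_variables = n_events // 10
print(f"Variables supportable with {n_events} events: {max_variables}")
```

In other words, with 20 candidate variables and no true predictors at all, you would still expect a “significant” finding about two times out of three – which is why the chance question matters.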

Overall, though, even with high-quality prognostic studies, we need to remember that a 99% accurate prediction is still wrong once in every hundred, and that it’s sometimes the disease, not us, that gets it wrong.
