StatsMiniBlog: Calibration vs Discrimination

There are a variety of clinical prediction rules in the world. If you've seen one (they always used to have a nomogram attached), you'll know the idea: it takes the answers to a few questions and comes up with a 'probability of a bad thing happening'.

As we've mentioned previously, there's an issue with deriving models and assuming they will work just as well in every other place you try them (and have a read here and here if you want a fuller explanation from the E&P section). A new model needs to be 'validated' – that is, checked to see if it still works in a new population.

Two bits of the model's performance could be looked at:

a) Calibration – does the 'predicted probability' from the model match (~ish) the observed probability in the new dataset?

b) Discrimination – if you set a threshold (say, less than 10% chance = low risk), does the model have the same sort of sensitivity and specificity as it did in the original data? (There's a little worked sketch of both checks just below.)
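
If you'd like to see the two checks in something concrete, here's a minimal Python sketch. It's not from any real prediction rule: the data are simulated (and simulated so the 'model' is well calibrated by construction), and the decile grouping and 10% threshold are illustrative choices. It compares mean predicted risk with observed event rate in each decile (calibration), then works out sensitivity and specificity at the threshold (discrimination).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated validation cohort (hypothetical data, purely for illustration):
# model-predicted probabilities and actual outcomes for 1,000 patients.
predicted = rng.uniform(0.0, 1.0, 1000)
outcome = rng.binomial(1, predicted)  # drawn so the 'model' is well calibrated

# Calibration: within each decile of predicted risk, does the mean
# predicted probability match (~ish) the observed event rate?
edges = np.quantile(predicted, np.linspace(0.1, 0.9, 9))
decile = np.digitize(predicted, edges)
for d in range(10):
    in_d = decile == d
    print(f"decile {d}: mean predicted {predicted[in_d].mean():.2f}, "
          f"observed {outcome[in_d].mean():.2f}")

# Discrimination: at a chosen threshold (here, <10% chance = 'low risk'),
# what are the sensitivity and specificity?
high_risk = predicted >= 0.10
sens = (high_risk & (outcome == 1)).sum() / (outcome == 1).sum()
spec = (~high_risk & (outcome == 0)).sum() / (outcome == 0).sum()
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

With real validation data you'd swap the simulated arrays for the model's predictions and the observed outcomes in the new population, and the same two summaries fall out.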

How close these need to be to claim effective validation is, as with many things, a matter of clinical judgement, but it's worth understanding what these checks are actually validating in order to make a sensible assessment.

– Archi
