Positive about predictions

In a previous post I muttered about how unhelpful sensitivity and specificity are to practicing clinicians, and how what we really want to know are the predictive values of a test.

Remembering the Table

                     Really diseased    Really not diseased
Test +ve             A                  B                      1. A/(A+B)
Test -ve             C                  D                      2. C/(C+D)    5. D/(C+D)
                     3. A/(A+C)         4. D/(B+D)

Now, I like to think about the % with disease. I want to know 1. the predictive value of a positive test (the proportion of patients with a positive test who have the disease), aka the positive predictive value (PPV). I also like to know 2. the predictive value of a negative test (the proportion of patients with a negative test who have the disease).

Others will instead want to know the ‘NPV’ (5.), the proportion of patients with a negative test who don’t have the disease.

Me – I don’t think about having a 95% PPV and 98% NPV as my test results; I like to know that 95% of patients with a positive test have the disease. And I don’t think “98% of patients with a negative test really are disease free”, but rather “2% of patients with a negative test actually have this condition”. (It’s this type of focusing on the failures that makes me a cheery chap to spend an evening with.)
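
If it helps, here is a minimal sketch in Python of those five proportions, using made-up counts for A, B, C and D (purely hypothetical, picked only so the percentages come out at the 95%, 2% and 98% figures above).

```python
# A minimal sketch of the table above, using made-up counts for A, B, C and D
# (hypothetical numbers, chosen only so the percentages match the text).

A, B, C, D = 190, 10, 4, 196   # test+/diseased, test+/not, test-/diseased, test-/not

ppv = A / (A + B)               # 1. proportion of test-positives who have the disease (PPV)
neg_with_disease = C / (C + D)  # 2. proportion of test-negatives who have the disease
sensitivity = A / (A + C)       # 3. proportion of the diseased who test positive
specificity = D / (B + D)       # 4. proportion of the non-diseased who test negative
npv = D / (C + D)               # 5. proportion of test-negatives who are disease free (NPV)

print(f"{ppv:.0%} of patients with a positive test have the disease")
print(f"{neg_with_disease:.0%} of patients with a negative test actually have the disease")
print(f"{npv:.0%} of patients with a negative test are disease free")
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```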

If the test you’re evaluating is being used in the same sort of population, with the same sort of prevalence of disease, as the one you are working in, you can stop here and believe these numbers. If there’s a difference – more or less disease, more or less suspicion – then you need to go and learn about Bayes and likelihood ratios, and Fagin, pickpockets and child exploitation.
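
As a taster, here is a rough sketch (my own illustration, with hypothetical sensitivity and specificity rather than figures from any real test) of how the PPV of the same positive result shrinks as prevalence falls, using the odds-and-likelihood-ratio form of Bayes’ theorem.

```python
# A sketch of why prevalence matters: the same test, with fixed (hypothetical)
# sensitivity and specificity, gives very different PPVs at different pre-test
# probabilities. Uses the likelihood-ratio form of Bayes' theorem.

sens, spec = 0.98, 0.95             # hypothetical test characteristics
lr_pos = sens / (1 - spec)          # likelihood ratio of a positive result

for prevalence in (0.50, 0.10, 0.01):
    pre_odds = prevalence / (1 - prevalence)   # pre-test odds of disease
    post_odds = pre_odds * lr_pos              # post-test odds after a positive test
    ppv = post_odds / (1 + post_odds)          # back to a probability = PPV
    print(f"prevalence {prevalence:.0%} -> {ppv:.0%} of positives have the disease")
```

Plug in your own pre-test probability and the point makes itself: the rarer the disease, the more of your positives are false.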

  • Archi

NOTE: not all of the elements of the final sentence are true.
