StatsMiniBlog: Type I and II errors

After reading the title, most people now feel vaguely nauseous. If you throw in alpha and beta, or worse α and β, then there’s a distinctly bilious taste.

Don’t get sick, though. Take a deep breath and fall back on what you already know:

Type I error = calling a difference ‘real’ when it isn’t. This is what we look at with p-values, and it is conventionally set at 0.05 (5%).

Type II error = calling something ‘the same’ when there really is a difference. This is the flip side of ‘power’ (power = 1 − the Type II error rate), which is conventionally set at 80% (ie 20% of the time we’ll accept a result that says “no difference” when, in truth, there is one). The sketch below shows both error rates in action.
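To make those two numbers feel less abstract, here’s a minimal simulation sketch (my own illustrative example, not from the post, assuming numpy and scipy): it runs lots of pretend two-group trials and counts how often a t-test makes each kind of error at alpha = 0.05. The sample size and effect size are arbitrary choices picked so the power lands near the conventional 80%.

```python
# Illustrative sketch: Type I and Type II error rates from repeated simulated trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # Type I error rate we accept
n_per_group = 64      # arbitrary sample size per group
n_sims = 5000

type_i = 0   # declared a 'real' difference when there was none
type_ii = 0  # declared 'no difference' when a true difference existed

for _ in range(n_sims):
    # No true difference: any p < alpha here is a Type I error
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_i += 1

    # True difference of 0.5 SD: any p >= alpha here is a Type II error
    c = rng.normal(0.0, 1.0, n_per_group)
    d = rng.normal(0.5, 1.0, n_per_group)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type_ii += 1

print(f"Type I error rate  ~ {type_i / n_sims:.3f} (should sit near {alpha})")
print(f"Type II error rate ~ {type_ii / n_sims:.3f}; power ~ {1 - type_ii / n_sims:.3f}")
```

Run it and the first number hovers around 0.05 and the power around 0.80, which is exactly what the two conventions above promise.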

We can actually assess both elements when we look at a confidence interval. If it’s a 95% CI, we check whether it crosses the line of no effect (0 for absolute differences, 1 for ratios). And if it does, we can judge whether the study is simply too uncertain to tell us much, or whether the interval sits close enough to the line of no effect to tell us there really is no difference – as we discussed previously.
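Here’s a small sketch of that check (again my own made-up numbers, assuming numpy and scipy): it builds a 95% CI for a difference in means and reports whether it crosses 0, the line of no effect for absolute differences.

```python
# Illustrative sketch: does the 95% CI for a difference in means cross the line of no effect?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(1.2, 2.0, 40)   # hypothetical measurements
control = rng.normal(0.0, 2.0, 40)

diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment) + control.var(ddof=1) / len(control))
df = len(treatment) + len(control) - 2          # simple approximation to the degrees of freedom
t_crit = stats.t.ppf(0.975, df)                 # two-sided 95% critical value
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"Difference: {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
if lo <= 0 <= hi:
    print("CI crosses 0: either too uncertain to tell, or consistent with no real difference")
else:
    print("CI excludes 0: the difference looks 'real' at the 5% level")
```

A wide interval straddling 0 says “too uncertain”; a narrow one hugging 0 says “probably genuinely no difference”; one that excludes 0 says the difference looks real.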

Stats isn’t really hard. It’s just that if you put anything in Greek, it seems δύσκολος.

– Archi
