A while back, we looked at propensity scores as a way of adjusting/controlling for confounders in non-randomised designs. Another approach is the hypothesis-driven use of an ‘instrumental variable’: a measurable feature which causes* the outcome, but only through the agency of another variable – the exposure we actually care about.
Uh?
In the olden days (before smartphones, WiFi and email addresses that didn’t take the form X547ht.i@ibm.com) folk studied by going to libraries. People who studied more were more likely to pass the memory tests we used to evaluate competence (exams). We could imagine working out whether studying more -> better exam passing by measuring ‘time in library’ – the ‘instrumental variable’ – on the assumption that library time drives studying but has no direct effect on exam results of its own.
And in clinical practice? Well, take the example of inhaled tobramycin (‘tob’) use and FEV1 decline in patients with CF. Enormous observational datasets exist, but it’s known that a) sicker folk may get more tob and b) centres tend to have their own ‘culture’ of tob-giving.
So, you can estimate the effect of tob by first working out how (relatively) likely someone is to have been given tob within their centre, adjusted for how poorly they are, then looking at the association between this relative likelihood and the outcome of interest (FEV1 decline).
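For the statistically curious, here is a minimal two-stage sketch of that idea in Python. Everything in it is made up for illustration – the variable names (centre_culture, frailty, tob, fev1_decline), the effect sizes and the simple linear model are assumptions, not real CF registry data or anyone’s published method.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# Instrument: how keen each patient's centre is on prescribing tob ('culture').
centre_culture = rng.uniform(0, 1, n)
# Unmeasured confounder: sicker patients get more tob AND decline faster.
frailty = rng.normal(0, 1, n)
# Exposure: amount of tob actually given (driven by centre culture and frailty).
tob = 2.0 * centre_culture + frailty + rng.normal(0, 0.5, n)
# Outcome: FEV1 decline, with a true tob effect of -0.5 (tob slows decline).
fev1_decline = 1.0 - 0.5 * tob + frailty + rng.normal(0, 1, n)

# Naive regression of outcome on exposure: biased because frailty is unmeasured.
naive = sm.OLS(fev1_decline, sm.add_constant(tob)).fit()

# Stage 1: how much tob you are likely to get, predicted from centre culture alone.
stage1 = sm.OLS(tob, sm.add_constant(centre_culture)).fit()
tob_hat = stage1.fittedvalues

# Stage 2: regress the outcome on the *predicted* exposure, not the actual one.
stage2 = sm.OLS(fev1_decline, sm.add_constant(tob_hat)).fit()

print("naive estimate:", naive.params[1])   # confounded (in this toy data it even gets the sign wrong)
print("IV estimate:   ", stage2.params[1])  # close to the true -0.5
```

Two caveats on the sketch: the stage-2 standard errors printed by plain OLS are not the right ones for an IV analysis, and a real analysis would also adjust for measured covariates (the ‘poorliness’) in both stages – a dedicated 2SLS routine handles both.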
When reading such a thing, what you need to assess is: 1. is the ‘IV’ reasonable (does it plausibly affect the outcome only through the exposure?), 2. have other confounders probably been accounted for, and 3. does it fit with other variations on this theme (propensity scores or ‘straight’ multivariable regression modelling).
I admit it’s not for the faint-hearted, but it would probably be worse to introduce structural equation modelling and directed acyclic graphs instead.
– Archi
* causes … ok, so it could actually just be very strongly linked rather than directly causal, but this blog is complex enough …