In my clinical role, it’s fairly easy to take the blame for most bad things that happen to my patients. I give them cytotoxic chemotherapy (for good reason, honest) and it’s a group of substances that we label with TERATOGENIC! HARMFUL! QUITE BAD FOR YOU! tags a lot of the time.
But how do we know, in most circumstances, if the drug/potion/puffer etc. is the cause of something adverse?
The basic tenets of appraisal for such a study are about assembling an appropriate group, and making fair assessments of outcomes – much as with every other type of study.
You’d like to see that:
- the exposed & unexposed groups were broadly comparable
- the outcomes and confounding variables were measured the same way
- the follow-up was long enough to have seen the outcome happen
These questions try to establish whether there’s something else about the patients or the group that is the ‘cause’ of the poor outcomes, that it’s not just that one group got extra-hard looking-at, and that we waited long enough to see if it was bad or not.
There are some additional, common-sense type questions that need to be asked too:
- did exposure-to-harm-causing-agent really happen before the harm was caused?
- is there a dose-response gradient? (more of the drug, more of the harm – this is usually found where causation is real)
- is there challenge-rechallenge data? (if the harm appears on exposure, resolves on withdrawal, and reappears on re-exposure, it’s pretty convincing that the drug did it)
- is the association consistent across studies? (that’s consistent – not identical – or is this a one-off finding in one study?)
- does the association make biological sense? (bearing in mind that biological sense is often rewritten when we see things that didn’t make sense until we explained them …)
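To make the dose-response point concrete, here’s a toy sketch of what a gradient looks like in numbers – a hypothetical cohort with entirely made-up counts (none of these figures come from a real study), computing the risk and relative risk at each exposure level:

```python
# Hypothetical cohort: (events, total patients) at each exposure level.
# All names and counts are illustrative only.
strata = {
    "none": (5, 1000),
    "low": (12, 1000),
    "medium": (25, 1000),
    "high": (48, 1000),
}

# Risk in the unexposed group is the baseline for comparison.
baseline_risk = strata["none"][0] / strata["none"][1]

for level, (events, total) in strata.items():
    risk = events / total
    relative_risk = risk / baseline_risk
    print(f"{level:>6}: risk={risk:.3f}  RR={relative_risk:.1f}")
```

If the relative risk climbs steadily as the dose does, that’s the gradient; a flat or erratic pattern should make you more sceptical of causation.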
Now the ideal way of getting comparable groups is randomisation, but if the events are rare, or occur long after the drug is given, a trial is not going to catch them. It might be that large cohorts, or case-control studies, are needed instead. For exceptionally rare events, collections of cases (without formal controls) may be sufficient to show the harms caused.
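For a case-control study, the usual measure of association is the odds ratio from the 2×2 table of exposure against outcome. As a sketch, with hypothetical counts (again, illustrative only, not real data), using the common Woolf log method for an approximate 95% confidence interval:

```python
from math import exp, log, sqrt

# Hypothetical 2x2 table (illustrative counts):
#               exposed   unexposed
# cases           a=30       b=70
# controls        c=10       d=90
a, b, c, d = 30, 70, 10, 90

# Odds ratio: odds of exposure in cases vs controls.
odds_ratio = (a * d) / (b * c)

# Woolf (log) method for an approximate 95% CI.
se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)
print(f"OR={odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

An odds ratio whose confidence interval sits clearly above 1 supports (but does not prove) an association between exposure and harm – the appraisal questions above still apply.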
‘Blaming’ is always difficult, but these appraisal pointers may be of some use in unpicking the strength of evidence behind the attributions.