For those who hold up personalised medicine as the future of therapeutics, the case is that it’s about telling the patients who will gain apart from those who won’t. Separating the signal from the background noise. If we ran a randomised trial of ‘antibiotics for fever’ we might come away believing antibiotics had no value, the benefit in significant bacterial infections swamped by the viruses, immune responses and dodgy thermometer readings that make up the noise. With this in mind, it makes sense to search within a potentially heterogeneous group of patients to identify those who do gain some benefit from an intervention, even if it is – broadly – ineffective.
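To put rough numbers on that intuition, here is a minimal simulation in Python. The recovery rates and the 15% bacterial fraction are entirely made up for illustration; the point is only how a real subgroup benefit washes out in the overall comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                      # patients per arm (hypothetical)
p_bacterial = 0.15            # assumed fraction with a true bacterial infection

# Assumed recovery probabilities (illustrative only)
p_recover_base = 0.70         # viral / self-limiting illness, either arm
p_recover_abx = 0.90          # bacterial infection treated with antibiotics

bacterial = rng.random(n) < p_bacterial

# Control arm: everyone recovers at the base rate
control = rng.random(n) < p_recover_base
# Antibiotic arm: only the bacterial subgroup gains anything
treated = np.where(bacterial,
                   rng.random(n) < p_recover_abx,
                   rng.random(n) < p_recover_base)

print(f"Subgroup benefit: {p_recover_abx - p_recover_base:.0%}")
print(f"Observed overall benefit: {treated.mean() - control.mean():.1%}")
# A ~20% benefit in the 15% bacterial subgroup dilutes to ~3% overall,
# easily lost in the noise of a modestly sized trial.
```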
How we do this poses a problem, though.
We have spoken before about data dredging, p-value fishing and ‘overprediction’ in the field of clinical decision rules and the like. The same issues apply here: if we hunt for a gene signature that predicts response, we might end up with a red herring. Roughly speaking, you need 16 times the sample size to examine an interaction (drug–gene) as you do for a simple trial outcome. That’s way, way too many patients for most studies to manage reliably.
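That factor of 16 isn’t magic; it falls out of back-of-envelope arithmetic (this follows Andrew Gelman’s well-known argument, under the common assumption that the interaction is half the size of the main effect). A sketch, with illustrative numbers:

```python
import numpy as np

# Assume outcome SD sigma, n patients per arm, and an interaction
# (the difference in drug effect between two equal gene subgroups)
# that is half the size of the main effect.
sigma, n = 1.0, 100                       # illustrative numbers

se_main = sigma * np.sqrt(2 / n)          # SE of a simple two-arm comparison
# Each subgroup holds n/2 patients, and the interaction is a
# difference of two subgroup effects, so its SE doubles:
se_inter = sigma * np.sqrt(2 / (n / 2)) * np.sqrt(2)

print(f"{se_inter / se_main:.1f}")        # -> 2.0

# Equal power needs an equal (effect / SE) ratio. The effect is halved,
# the SE is doubled, and SE shrinks like 1/sqrt(n), so n must grow by:
print((2 * 2) ** 2)                       # -> 16
```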
Post-hoc analysis can potentially point the way towards fine-tuning treatment. But remember the maybes and howevers of good statistical common sense before you commit to a herring-directed therapy.
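And to see how easily a herring surfaces, here is a toy dredge (hypothetical numbers throughout): a drug with no effect at all, tested post hoc against 20 random ‘gene signatures’.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_genes, alpha = 400, 20, 0.05

# A truly ineffective drug: outcomes are pure noise in both arms
treat = rng.integers(0, 2, n)            # 1 = drug, 0 = placebo
outcome = rng.normal(size=n)

hits = 0
for g in range(n_genes):
    marker = rng.integers(0, 2, n)       # a random, meaningless "signature"
    # Post hoc: test the drug only in marker-positive patients
    pos = marker == 1
    _, p = stats.ttest_ind(outcome[pos & (treat == 1)],
                           outcome[pos & (treat == 0)])
    hits += p < alpha

print(f"{hits} of {n_genes} null markers look like 'responders' at p<{alpha}")
# Expect roughly one false herring per 20 looks; more looks, more herrings.
```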
- Archi