We’re great fans, here on the Archimedes blog, of trying to get people to think about the meanings and impacts of research – asking ‘What would Jack want?’ and not believing p-values. One key idea is that of an ‘important clinical difference’ (see – avoided significantly …), which is essential in working out whether a trial is telling you two treatments really are equivalent, or whether the study is just underpowered.
If you’re designing a trial you’ll want to be very, very sure that this difference, which you’re going to base all your study numbers upon, is chosen on the best possible grounds. Aren’t you?
Which is why a recent systematic review of methods to determine this value – the DELTA group review – makes really interesting reading. They identified papers using variants of seven approaches, six of which address the idea of an ‘important’ difference:
- ‘anchoring’ an important difference (patient or expert opinion)
- distributional differences (looking for changes that are greater than you might find by chance in a group taken at random from the population)
- health economic approaches (a difference that would make a cost-effective change)
- ‘better than d’ (assuming that a change of more than 0.2 standardized effect sizes counts as a small but meaningful difference)
with two also aiming to find ‘realistic’ differences as well …
- formal opinion-seeking (via survey +/- Delphi-type methods – explicitly asking what might be realistic too)
- evidence-based review (looking at what other trials have found to be realistic and regarded as important +/- put into practice)
and one which looks only at a ‘realistic’ difference …
- pilot study (demonstrating the differences that you might well find but not commenting on importance)
‘Important’ differences are, as we discuss, what matters to patients. ‘Realistic’ differences are also vital – but they are process-vital – they tell us what we are actually likely to find. Ideally both elements would be incorporated, but very few of the published papers they examined explicitly demonstrated how they did this.
Which of these is ‘right’ is tricky to say. In trial design, a difference that is both realistic and important seems essential. In taking trial findings forward, it would be great to see how the researchers established why the difference they sought is meaningful. It’s all good food for thought though.
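To make the ‘base all your study numbers upon it’ point concrete, here’s a rough sketch (mine, not from the DELTA review) of the standard normal-approximation sample-size formula for a two-arm trial comparing means. The function name and the example numbers are illustrative assumptions; the point is that halving the target difference quadruples the trial size.

```python
from math import ceil

def n_per_arm(delta, sd, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm for a two-arm trial comparing means.

    Defaults assume two-sided alpha = 0.05 (z = 1.96) and 80% power
    (z = 0.84): n = 2 * ((z_alpha + z_beta) * sd / delta)^2.
    """
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# A 'small' standardized effect (Cohen's d = 0.2, i.e. the target
# difference is one fifth of the standard deviation)...
print(n_per_arm(delta=2.0, sd=10.0))  # 392 per arm

# ...and halving the target difference quadruples the trial.
print(n_per_arm(delta=1.0, sd=10.0))  # 1568 per arm
```

Which is exactly why an over-optimistic ‘important difference’ produces an underpowered trial: the numbers are exquisitely sensitive to it.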