Mike Clarke: Assessing the impact of participating in research – the need for core outcomes?

The COMET Initiative is making it easier for people to develop, identify, and use core outcome sets to improve the potential impact of research findings on healthcare practice, health, and wellbeing. But what about the challenge of assessing the potential impact of being part of a piece of research on health and wellbeing? Is there a role for core outcome sets in investigating the so-called “trial effect,” to see if people who take part in research, or are treated by practitioners or institutions that conduct research, do better than other patients? The answer is a resounding yes, and is highlighted by a recently reported study of lung cancer care in the UK [1] and a systematic review of earlier research [2]. Taken together, these provide further evidence that it is not just reviews of randomised trials of the effects of healthcare interventions that would benefit from core outcome sets. Reviews of methodology would also be strengthened if it were easier to compare, contrast, and combine the findings of separate studies.

The opportunity to consider this arose through serendipity at the second meeting of the COMET Initiative in July, and a conversation with Mick Peake about the audit of patients treated for lung cancer in English hospitals with different levels of interest in research, which has now been published. This research into the use of chemotherapy in NHS Trusts in the first decade of the twenty-first century found that patients seen at a hospital with a keen interest in trials were more likely to receive chemotherapy, and that chemotherapy was associated with improved survival [1].

However, combining this new study with earlier research is difficult. The systematic review that was published earlier this year found eight studies that had compared patients treated in an institution that was involved in research with those treated in institutions doing little or no research [2]. It also included five studies where the comparison was between practitioners (rather than institutions) with different levels of research activity. The results within either type of study could not be pooled because of heterogeneity, both in the design of the studies and, importantly, in their outcome measures.

As with the lung cancer study, the available findings from existing research suggest that there might be a “trial effect” of better outcomes, greater adherence to guidelines, and more use of evidence by practitioners and institutions that take part in trials; but the consequences for patient health remain uncertain and meta-analyses are not possible. However, if there were a core outcome set for such studies, which might include generic measures such as the use of an intervention supported by evidence, the resolution of the underlying condition, or adherence to guidelines, it would have been possible to conduct meta-analyses, explore statistical heterogeneity, and plug in the findings of the new lung cancer study.

Until then, we will continue to rely on “plan B” for systematic reviews, with the individual studies brought together in a single document but without the ability to do anything with their results beyond describing each study separately in a narrative way. This means that we remain in a position where the most robust conclusion is that there is no evidence that patients treated by practitioners or in institutions that take part in trials do worse than those treated elsewhere; when we would much prefer to answer the question: “Is there a difference, and if there is, how big is it?”

Michael J Clarke is Director of the All-Ireland Hub for Trials Methodology Research, Centre for Public Health, Queen’s University Belfast, UK

References:

1. Rich AL, Tata LJ, Free CM, Stanley RA, Peake MD, Baldwin DR, Hubbard RB. How do patient and hospital features influence outcomes in small-cell lung cancer in England? British Journal of Cancer 2011; 105: 746-752. [http://www.nature.com/bjc/journal/v105/n6/full/bjc2011310a.html]

2. Clarke M, Loudon K. Effects on patients of their healthcare practitioner’s or institution’s participation in clinical trials: a systematic review. Trials 2011; 12: 16. [http://www.trialsjournal.com/content/12/1/16]

  • This is very interesting. It would be helpful if the COMET Initiative could develop and publish a standard data model for clinical trials which included a systematic way of recording outcome measures. There is one already available – http://rctbank.ucsf.edu/ – and there may be others, of course.