This review paper is a significant step in a long and bumpy journey. The idea started four years ago, when I began a new post as a research fellow in patient and public involvement (PPI) impact assessment, funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre. For several years, the NIHR has required researchers to involve patients or members of the public in research as partners or advisers. I was tasked with measuring the effect of PPI, which had barely been attempted before, partly because there is disagreement about the rationale for PPI and its framing as a methodological intervention, and partly because the complexity and diversity of PPI make this a hugely challenging endeavour. I was, however, fortunate to be supported by a committed advisory group of patients, PPI practitioners, and academic experts.
We took the consequentialist rationale for PPI (that it improves research) and aimed to measure its effect on one important element of research quality and efficiency: the recruitment and retention of participants in clinical trials. These outcomes are readily measurable and are frequently the Achilles' heel of trials worldwide. Poor recruitment of participants is worryingly common and arguably unethical: a recent study of the National Library of Medicine clinical trial registry revealed that 48 027 patients had enrolled in trials closed in 2011 that were unable to answer the primary research question meaningfully owing to insufficient participant numbers.
Once participants have been recruited, their retention in the trial (described as the "Cinderella" to the noisy ugly sister of trial recruitment) is equally important. Poor recruitment and retention lead to wasted (often taxpayers') money, delayed patient access to new treatments, and slower abandonment of fruitless ones. As one of our patient partners put it: "a trial that recruits more quickly will ultimately benefit patients more quickly." It is in the interests of us all to ensure that clinical trials successfully recruit and retain participants, and PPI has been identified as a possible solution to some of these difficulties.
We hoped that this review would shed light on the extent to which PPI does (or does not) affect recruitment and retention rates, and on the factors that might influence this. The first of many challenges we faced was defining and identifying PPI, as there is no universal definition and it is poorly reported in the literature. We started with the INVOLVE definition, but found it wasn't specific enough; we needed to flesh it out and agree on important details. In the end, we opted for a relatively broad definition, carrying out subgroup analyses to distinguish between different types of PPI.
Halfway through the review journey, my son was born and I took several months of maternity leave. Although a happy time for me personally, it was a spanner in the works for the review, which then desperately needed updating. I was fortunate to have access to the University of Oxford Returning Carers Fund, which supports researchers who have taken a break for caring responsibilities to re-establish their careers. This funded crucial research assistance, enabling us not only to update the review, but also to raise its quality by ensuring independent double screening and data extraction by a second reviewer. This paper is therefore also a testament to the importance of support schemes for researchers taking parental leave.
We discovered that, while not all PPI interventions significantly improved recruitment, on average they did. This effect seemed to be magnified when people with lived experience of the condition under study were involved. Some of the authors thought that these findings merely confirmed what was already obvious, while others were genuinely surprised. I was disappointed that we weren't able to draw conclusions about the effect of PPI on retention because of the dearth of studies evaluating this, but at least it highlighted a gap for future research.
Whatever your beliefs about PPI, our findings do indicate one likely and important benefit to clinical trials. They are limited by the particulars of the PPI interventions included (some were not pure PPI; none was introduced early enough to influence the research question or the whole trial design). And they tell us nothing about when, why, and how PPI has this effect, which we are exploring in a follow-on realist analysis of the included studies. However, I believe that they are a welcome addition to an otherwise largely qualitative PPI evidence base, which seems to carry less weight in the biomedical research world. Our findings have already informed a related study that aims to develop a PPI intervention to enhance recruitment and retention in surgical trials. I hope they will be useful to others planning and designing clinical trials too: if you hadn't thought about involving patients before, or were sceptical about the benefits, perhaps it's time to think again.
Joanna Crocker is a research fellow at the Nuffield Department of Primary Care Health Sciences.
Competing interests: Please see the full disclosure on the research paper.