Through their experience of living with a health condition, patients gain experiential knowledge that no one else has. Health researchers typically lack this knowledge, but often don’t know what they’re missing. Their decisions can be biased by the gaps in their knowledge and the assumptions they make to fill them. By involving patients in their research, researchers learn from other people’s experience, which in turn changes their own thinking, values, choices, and actions. This leads to the commonly reported outcomes of involvement—improved research design, delivery, and dissemination—and, over time, the wider impacts of a changed research culture and agenda.
These crucial moments of learning in conversations—“the light bulb moments”—ultimately lead to more relevant research. For example, through dialogue with patients, a group of researchers aiming to identify the socioeconomic impacts of carpal tunnel syndrome realised that their proposal didn’t consider patients’ working lives. They then changed their protocol so that their research would generate findings that were more meaningful to patients. Before they entered that conversation, those researchers didn’t know what they were about to learn. The insight was specific to them and their project; other researchers might not have started with the same gap in their thinking and might have learnt something different from a similar interaction. This means that, for researchers, the experience of involvement is subjective and its outcome unpredictable.
In the field of patient and public involvement in research, the value of patients’ experiential knowledge, and the difference between involving patients and studying them, are well understood. But as yet, the same value hasn’t been ascribed to researchers’ experiential knowledge and their personal accounts of the insights they gain through involvement. All too often researchers dismiss these accounts as anecdotal and call for robust evidence of impact, i.e. empirical data. Measuring the impact of involvement has become the Holy Grail, but if the focus is purely on objective, observable outcomes, much can be missed.
There seems to be an assumption that if there were empirical evidence of the impact of patient involvement in research, then researchers who are still sceptical about involvement would be persuaded to try it, and that once they had, they’d be hooked. The researchers who experience a light bulb moment are indeed the ones who say they’ll never do research without involvement again. But who has ever been persuaded to go on a rollercoaster by the physics? People are more often convinced to try new experiences by hearing about them from someone else, especially someone they know and trust. Telling a story is one of the most powerful forms of communication. If we want researchers to change what they do, might stories from their peers be more influential than a randomised controlled trial?
By focusing on how patient involvement objectively improves research (e.g. better recruitment or clearer information sheets), there’s also a risk of creating unmet expectations. For example, a researcher who took his information sheet to a patient panel expecting them to make it clearer instead received feedback that was entirely about his choice of research method. It’s not possible to predict ahead of time which problems or issues patients might identify or fix, only that researchers will learn from the experience of involvement. Might it be more helpful, then, to explain to researchers what involvement will do for them: stimulate new ideas, challenge assumptions, identify problems and solutions, and increase their confidence and motivation?
Another assumption seems to be that empirical evidence is needed to convince other stakeholders of the value of involvement. For example, funders who have invested heavily in patient involvement might reasonably ask, “What difference is this making?” Collecting hundreds of different stories of learning experiences might not be practical or informative here. But focusing on what can be objectively measured might not provide the expected insights either. For example, if involvement improved recruitment in 70% of University A’s studies but in only 30% of University B’s, would this mean that University B wasn’t doing involvement as well? Not necessarily. Researchers in University A might simply be worse at writing information sheets and invitation letters, leaving more room for involvement to improve them. More importantly, evidence of better recruitment might not be proof of the anticipated value of involvement if those studies aren’t also addressing issues that genuinely matter to patients.
Perhaps the most important question to answer is “What do we want patient involvement to achieve?” Then we can work backwards to agree how best to evaluate its success. The commonly stated goal of patient involvement is to change the research agenda, so that research findings genuinely help patients and improve their lives. Assessing this might require empirical investigation, or it might not: one of the easiest ways to find out might simply be to ask patients for their views.
Kristina Staley is a freelance consultant supporting and promoting patient and public involvement in research. She is currently working with the James Lind Alliance, INVOLVE, and other voluntary and statutory research organisations.
Competing interests: None declared.