By Paige Fitzsimmons.
In April 2022 the UK Department of Health and Social Care published ‘Better, broader, safer: using health data for research and analysis’. This report, which has come to be known as the ‘Goldacre Review’, recommends the use of Trusted Research Environments (TREs) to facilitate the building of public trust in health data sharing activities in the UK.
The TRE model is a viable way to mitigate concerns around data sharing, and because of this utility TREs had already been implemented by many of the UK’s health data initiatives, including Genomics England, Our Future Health and HDR UK, prior to the report. The Goldacre Review and its recommendations around TREs will have broad policy implications across the NHS, and perhaps more widely across the UK and internationally.
TREs provide approved researchers with remote access to health data via a virtual desktop. These are controlled environments from which researchers can export only aggregate analysis results, never individual-level data, and only after approval from data custodians. The analogy of a reference library, rather than a lending library, is often drawn: a TRE acts much like a reference library, with the information (data, in the case of TREs) accessed under supervision but never leaving that supervised environment.
One of the main objectives of the TRE is to provide increased data security and protection through increased oversight and regulation. The Goldacre Review suggests that this additional security will respond to public concerns surrounding health data misuse. Ultimately, the Review claims, this will work to build public trust by providing assurances with regard to research conduct and data access.
Calls for increased public trust in general are not uncommon, and they continue to provoke change in the ways organisations accessing and using data operate. However, given the nature of trust, we argue in our recent article ‘Trust and the Goldacre Review: Why Trusted Research Environments are Not About Trust’ that TREs are not, in fact, about trust at all.
While philosophers have not yet agreed upon a single account of trust, there are consistent elements which are fundamental across all proposed accounts. Trust is an attitude we have towards others and it is something which is placed in those we deem to be, or hope to be, trustworthy. For trust to be appropriate we must have a level of expectation about the competence and willingness of those we are trusting. These expectations, and the potential for them to be let down or betrayed, result in a level of vulnerability which is necessary for trust to exist.
We argue that the increased security measures and oversight activities offered by the TRE model work to reduce vulnerability and, in doing so, reduce the need for trust in this context. Rather, we rely on TREs to function as they have stated they will, and on the regulations in place to prevent them from doing otherwise. Without vulnerability to misuse, there is little need for trust in this context.
Overall, we think TREs are a good idea. We believe they offer solutions to many of the challenges raised in the complicated landscape of data sharing and, in particular, that they go some way to addressing overly burdensome governance procedures. Addressing these challenges through increased security and oversight may even result in members of the public being more comfortable sharing their data. We do, however, think that consideration must be given to the language which is used in this context.
Using ‘trust’ as a descriptor here could set expectations that go beyond the regulatory or technical boundaries of the TRE, opening the door to feelings of betrayal should those expectations not be met. The name ‘Trusted Research Environment’ evokes the feelings associated with close relationships: relationships built on the expectation that the trusted party will act as one trusts them to, not merely within the strict boundaries of the TRE.
The use of TREs may ultimately help to increase public trust, but this trust will be directed not at the research environment itself, but at the whole system, of which TREs are only one piece. Increased security within organisations, and oversight of their data sharing activities, may lead people to trust the system which collects, shares and governs the use of patient data for research.
Given all of this, we suggest that the way forward is to adopt a more suitable name for this sort of environment, such as ‘Secure Research Environment’. This proposed shift in language is not simply semantic: it will help to set appropriate expectations about the capacity of the research environment, namely a secure environment which reduces vulnerabilities in health data sharing activity.
Authors: Mackenzie Graham1,2, Mark Sheehan2,5, Paige Fitzsimmons2, Richard Milne3,4
1 Wellcome Centre for Ethics and Humanities, University of Oxford
2 Ethox Centre, University of Oxford
3 Engagement and Society, Wellcome Connecting Science, Cambridge, United Kingdom
4 Kavli Centre for Ethics, Science and the Public, Faculty of Education, University of Cambridge
5 Oxford NIHR Biomedical Research Centre, Oxford University Hospitals Trust
Competing interests: None
Social media accounts of post author: @Fitz_Paige