Sharing Patient Data for Research – A Matter of Trust

By Rob Lawrence.

Through a unique deliberative process involving public participation, we arrived at some conclusions about how best to foster trust in a large organisation such as the NHS, especially where the use of patient data plays a key role in research – conclusions that I initially found surprising, even counter-intuitive.

Using formal guidelines, ethics committees and other bodies aim to engender public trust by establishing best practices to safeguard patients’ interests and privacy. Strict regulation and oversight are typically demanded. Intuitively, this may seem the best approach. But is it really?

That’s what we hoped to unravel because, despite all the efforts of well-intentioned committees and practitioners, trust overall may be declining, in which case there could be serious implications for the NHS. For example, if enough patients start withholding permission to use their electronic patient records (EPRs), the quality of research using EPRs will be diminished. Similarly, in the current context, the public may be reluctant to use smartphone ‘AI’ apps for virus contact tracing if they lack confidence in the security of their data. Widely reported security breaches haven’t helped maintain trust.

The development of this paper involved eleven members of the public who responded to an advertisement in the local press. They met on two successive weekends for iterative group discussions and breakout groups, led by two academic ethicists. Presentations from Oxford NHS Trust and Oxford University subject matter experts provided useful background information ahead of the group discussions.

O’Neill’s challenge

Onora O’Neill’s 2002 Reith Lectures called into question the received wisdom on fostering trust in institutions, arguing that the widely accepted approach is misguided because it is based on misunderstandings of what ‘trust’ means. So, existing strategies, usually geared towards reducing uncertainty through regulation and control, may not always be appropriate (or may be desirable but not sufficient) and may even have the unintended consequence of reducing trust. Institutions need to demonstrate trustworthiness through relevant values and commitments as well as appropriate regulatory procedures.

Trust in Organisations – what does it mean?

The problem of fostering trust in organisations can be better understood by clarifying the concepts of trust and trustworthiness in the way O’Neill suggests – by focussing on trustworthiness.

In essence, trust requires a leap of faith in the person or organisation trusted. It entails risk and involves relinquishing some control. It tends to be confused with reliance, which seeks to reduce risk by following procedures intended to ‘guarantee’ outcomes. For example, we trust clinicians (experts in their field) to act in our best interests. But we depend on (rely on) service providers such as banks or the Royal Mail to fulfil obligations without really trusting them; reliance is based on a history of consistent service. In fact, requiring ‘transparency, accountability and communication’ might mean we’re not prepared to trust. By contrast, trust is based on the values and commitments of a trustworthy individual or organisation.

Implications for the NHS

Evidently, any organisation that is perceived as trustworthy is more likely to be trusted. The NHS is complex and geographically distributed, with many disparate yet inter-dependent parts, and it would be surprising if a one-size-fits-all approach worked. For example, we rely on (rather than trust) routine functions such as NHS IT systems and data security, by designing accountable processes and standards that increase control in a transparent way and so reduce risk.

Following O’Neill’s suggestion, applying the same kinds of control measures may be counterproductive when it comes to trusting, say, clinicians or medical researchers. NHS trustworthiness depends on the reliability of routine functions such as IT or data security, plus the judgement, experience and values of clinicians and researchers working with uncertainty to serve patients’ best interests. The Caldicott Guardian is one example of how an evidently trustworthy professional, following a clear and transparent process, can ensure the NHS respects the confidentiality of patient data used for research by exercising judgement on a case-by-case basis, ‘acting on our behalf’. This not only speaks to the need for institutional trustworthiness but is, arguably, more effective and efficient.

We think that existing frameworks designed to protect patient data could be further improved by taking into account the differences between trust and reliance, tailoring them to emphasise either guarantees or trust depending on the NHS context. However, no matter how trustworthy the NHS is, not everyone will feel able to trust it, and policies should reflect this.

Commercial third parties play an increasing role in NHS operations, from basic services, through technical support, to research using NHS patient data. Given the inherent potential for conflict between commercial interests and health care, we think reliable (rather than trustworthy) mechanisms are called for to guarantee that patients’ best interests are served – especially when it comes to how their data are used.

 

Paper title: Trust, trustworthiness, and sharing patient data for research

Authors: Mark Sheehan [1], Phoebe Friesen [2], Adrian Balmer [3], Corina Cheeks [3], Sara Davidson [3], James Devereux [3], Douglas Findlay [3], Katharine Keats-Rohan [3], Rob Lawrence [3], Kamran Shafiq [4]

Affiliations:

1. Ethox Centre, University of Oxford, Oxford, UK

2. Biomedical Ethics Unit, Social Studies of Medicine, McGill University, Montreal, Quebec, Canada

3. Oxford, UK

4. London, UK

Competing interests: None declared
