In June 2024, a ransomware attack on Synnovis — the pathology firm jointly owned by Guy’s and St Thomas’ and King’s College Hospital foundation trusts — left blood testing services across south east London inoperable for months. More than a thousand operations were cancelled. London’s blood stocks were depleted to the point where a national appeal had to be launched. By June 2025, King’s College Hospital had confirmed that one patient died in part because of a delayed blood test result caused by the attack.
The attack could have been prevented by multi-factor authentication — the same basic security measure familiar to most people from online banking.
Both trusts operated within established NHS cybersecurity frameworks. Synnovis had policies. The organisation was not ungoverned. Yet a single behavioural gap — the absence of MFA on a critical clinical system — persisted long enough for attackers to exploit it, in a governance environment that offered insufficient connection between that behaviour and the patients it put at risk.
The problem is structural, not attitudinal
The standard account of security failures in healthcare treats them as knowledge or incentive problems. Staff either do not understand the risks or do not feel sufficient accountability for them. The remedy is more training, more assurance, more compliance frameworks.
This account is partly right and fundamentally incomplete. It fails to explain why intelligent, motivated clinical professionals who would never cut corners on infection control routinely share login credentials to avoid delays, ignore security prompts, or treat MFA as an obstacle to workflow. These are not acts of bad faith. They are rational responses to a structural reality: cybersecurity requirements are experienced as external impositions on clinical work, not as integral to it.
Viktor Frankl argued that sustainable human behaviour — the kind that persists under pressure, without constant external enforcement — requires meaning: a perceived connection between one’s actions and outcomes that genuinely matter. Where that connection is absent, compliance is performative. It holds when the auditor is present. It erodes under operational pressure.
The hand hygiene parallel
Clinical governance has faced this problem before.
For decades after Ignaz Semmelweis demonstrated, in 1847, that handwashing dramatically reduced maternal mortality from puerperal fever, adoption remained inconsistent and contested. The evidence was available. The behaviour was not. Sustained, comprehensive change came only after germ theory provided a coherent explanatory framework — connecting handwashing to infection, infection to patient harm.
Clinical staff did not resist hand hygiene because they were careless or uninformed. They resisted because the required behaviour was not yet connected, in an experientially meaningful way, to the care outcomes they were trained to prioritise. Once that connection was made, the behaviour became durable.
NHS cybersecurity has not yet made that connection at scale. The Synnovis attack illustrates what persists when it has not: a known, preventable control gap whose clinical consequences remained invisible until they became irreversible.
How governance architecture creates the gap
Most NHS organisations govern cybersecurity through information governance structures — separate from clinical governance, separate from patient safety committees, and separate from the forums where care quality is measured and discussed.
The separation is administratively logical. Information governance has a distinct regulatory remit and distinct legal obligations. But the structural consequence is a meaning gap. A clinical professional completing a mandatory cybersecurity e-learning module has no mechanism to connect that requirement to a patient outcome. The module exists in an information governance world. Their professional identity exists in a clinical world. Nothing makes that connection for them, and so it is rarely made at all.
The Synnovis attack illuminates the consequence precisely. MFA was not enabled on a critical pathology system. In the information governance world, this was a configuration oversight. In the clinical governance world — had cybersecurity lived there — it would have been a patient safety risk, visible to the clinical leads responsible for pathology continuity. It was not visible there, because that is not where cybersecurity sat.
What boards can do differently
Three governance adjustments — none requiring new investment or wholesale restructuring — can begin to close this gap.
Make cybersecurity a clinical governance standing item. A cyber incident that disrupts pathology services, cancels surgery, or depletes blood stocks is a patient safety event. Governing it only through information governance committees severs the link between security and care. Clinical governance committees should receive regular cybersecurity intelligence framed around clinical impact: which systems are most vulnerable, what disruption would mean for patient access and continuity of care, and where current security behaviour is weakest.
Reframe security metrics as care continuity metrics. Boards receiving compliance-focused cybersecurity reports — training completion rates, DSPT scores, penetration test summaries — are receiving information relevant to auditors, not governors. Boards receiving security intelligence framed around the clinical systems most exposed to disruption, and the care pathways that depend on them, are better placed to make governance decisions and to understand why those decisions matter.
Include cybersecurity scenarios in clinical training. Clinical induction, mandatory updates, and simulation exercises rehearse fire safety, infection control, and medication errors. They rarely rehearse what to do when a clinical system becomes unavailable, or how safe care is maintained when digital tools fail. The scenario itself is not the investment — the signal it sends is. Cybersecurity, rehearsed alongside infection control, stops being an IT responsibility and starts being a clinical competence.
The governance question boards should ask
The useful diagnostic is not whether the cybersecurity policy is current. It is whether clinical staff in this organisation would describe secure behaviour as part of delivering good care — or as something that gets in the way of it.
In most organisations, the answer to that question is already known. The information governance structures that currently own cybersecurity were not designed to create clinical meaning. Clinical governance structures were. Closing the gap between them is not additional complexity. It is the correction of an architectural separation that was never intended to persist.
Author
Vsevolod Shabad

Vsevolod is a Fellow of the BCS and a researcher affiliated with the University of Liverpool. He specialises in the behavioural dynamics of security governance and decision-making under uncertainty in safety-critical sectors. The views expressed are those of the author in a personal capacity. Advisory enquiries via vshabad@vshabad.com.
Declaration of Interests
The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Declaration of AI Use
During the preparation of this work, the author used Claude (Anthropic) to improve readability and language quality as a non-native English speaker. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.
Funding Statement
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.