When “More Data” Feels Safe but Increases Risk: A Boardroom Paradox. By Vsevolod Shabad

When analysing cyber governance across the NHS, a recurring pattern emerges. A warning is raised — perhaps a signal about supplier fragility, a shift in cyber threat patterns, or early indicators of workforce burnout. The risk is not yet a full incident, but the signal is clear enough to create unease.

The immediate response from the Board is usually reasonable, diligent, and entirely predictable: “Can we get a deep dive on this? Can we bring a validated data set to the next Quality Committee?”

On the surface, this is good governance. It aligns with the “Well-led” framework; it demonstrates evidence-based decision-making. But in the context of rapidly moving threats, this reasonable request often masks a dangerous failure mode: governance activity substituting for governance action.

The Illusion of Defendability

We operate in a system that penalises premature action and rewards assurance. For a Board member, commissioning a report is a safe act. It demonstrates activity, it creates an audit trail, and it defers the difficult choice until “certainty” arrives.

Research on risk literacy suggests that organisations often confuse risk (which can be calculated) with uncertainty (which must be navigated).

When Boards demand statistical significance for a non-linear threat — like a ransomware precursor or a sudden surge in A&E pressure — they are often not seeking clarity. They are seeking defendability. The system unconsciously prioritises the safety of the decision-making process over the safety of the organisation.

Analysis as a Pause Button

Consider a cyber security warning. The NCSC regularly highlights that the time between initial compromise and impact is shrinking. Yet, when a Board asks for “more data” on a softening control, that request inadvertently acts as a pause button.

By the time the data is “robust” enough to satisfy a traditional audit committee, the risk has often already crystallised. The warning becomes a history lesson. In this gap between the signal and the evidence, governance is most severely tested. We frame inaction as “prudence,” when in reality, we are simply waiting for the comfort of certainty before acting.

The Clinical Paradox

This dynamic is particularly ironic given that many Board members are clinicians. In their clinical practice, the concept of triage is intuitive. A consultant in the Emergency Department does not wait for a blood culture to fully grow before treating a patient with signs of fulminant sepsis. They act on the signal because the cost of waiting for certainty is death.

Yet, when those same principles are applied to organisational risk, the instinct shifts. In the boardroom, the system often requires the equivalent of a biopsy result before it feels authorised to apply a bandage.

Governing Without Safety Nets

Leadership under uncertainty means accepting that some decisions must be made before the evidence is complete. To move from “assurance-seeking” to “risk-navigating,” Boards need a shift in mindset:

  1. Reverse the Burden of Proof: When a decision is made to wait for more data, it should be subject to the same rigorous risk assessment as the decision to act. Ask specifically: Does the safety gained by waiting for this report outweigh the exposure of the delay?
  2. Define Triggers, Not Just Targets: Instead of waiting for proof of harm (lagging indicators), governance should focus on the threshold of unease that triggers a protective response (leading indicators).
  3. Accept the Ambiguity: Effective governance involves protecting executives who act on soft signals. If the system implicitly penalises “false alarms,” it ensures that the next warning will only be heard when it is too late.
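The distinction between lagging and leading indicators in point 2 can be illustrated in miniature. The sketch below is purely hypothetical — the signal names, severity scale, and the `UNEASE_TRIGGER` threshold are illustrative assumptions, not a real scoring model — but it shows the structural difference between a rule that waits for proof of harm and one that fires on a pre-agreed threshold of unease:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A soft warning signal, scored 0.0 (noise) to 1.0 (confirmed harm)."""
    name: str
    severity: float       # current assessed severity of the signal
    harm_observed: bool   # has the risk already crystallised into an incident?

# Hypothetical pre-agreed "threshold of unease" set in advance by the Board.
UNEASE_TRIGGER = 0.4

def lagging_response(signal: Signal) -> str:
    """Traditional assurance-seeking: act only once harm is proven."""
    return "act" if signal.harm_observed else "commission more analysis"

def leading_response(signal: Signal) -> str:
    """Trigger-based governance: act once the agreed threshold is crossed."""
    return "act" if signal.severity >= UNEASE_TRIGGER else "monitor"

# A softening control: a clear signal, but no incident yet.
precursor = Signal("ransomware precursor", severity=0.55, harm_observed=False)

print(lagging_response(precursor))  # the pause button: waits for proof
print(leading_response(precursor))  # the trigger: protective action now
```

The design point is that the trigger is agreed *before* the signal arrives, so acting on it requires no fresh authorisation — the debate about "how much unease is enough" happens in calm conditions, not mid-crisis.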

The challenge for Non-Executive Directors is not always to demand more certainty. It is to have the courage to govern without it. If a Board waits for 100% of the data to make a decision, it is likely no longer managing a risk; it is managing an incident.

Author

Vsevolod Shabad

Vsevolod is a Fellow of the BCS and a researcher affiliated with the University of Liverpool. He specialises in the behavioural dynamics of security governance and decision-making under uncertainty in safety-critical sectors.

Declaration of Interests

The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Generative AI and AI-Assisted Technologies in the Writing Process

During the preparation of this work, the author used Claude (Anthropic) to improve readability and language quality as a non-native English speaker. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the publication’s content.
