Alex Ruani is a doctoral researcher in diet-health misinformation at University College London and chief science educator at The Health Sciences Academy.
In a recent media interview, Google’s chief clinical officer was reported to have likened the presence of health misinformation on digital platforms to weeds in a garden, stating: “If all you did was weed things, you’d have a patch of dirt”. From this view, removing misinformation risks creating a barren information landscape.
This comparison appears to be more than a casual metaphor. It signals a growing tolerance, even tacit endorsement, of misinformation’s role in keeping digital gardens alive. While the intention may have been to defend information diversity, the framing neglects a key ethical standard embedded in healthcare: primum non nocere (first, do no harm).
Digital misinformation is not a benign weed. It has thorns. It stings. It poisons. In public health, the consequences can be irreversible.
The real-world harm of misinformation
Research over the past decade has made one thing abundantly clear: exposure to misinformation can distort health decisions, discourage essential care, and amplify preventable harm. A widely cited 2021 study found that individuals exposed to COVID-19 vaccine misinformation were significantly less likely to intend to vaccinate. The same applies to nutrition, where false claims promoting extreme diets or unsafe supplementation as ‘cures’ continue to flourish: herbal supplements have been linked to 20% of drug-induced liver injury cases and an estimated 23,000 emergency visits in the U.S. in 2015 alone.
The risks are not theoretical. Consider the 2020 spike in poison control calls following online promotion of bleach-based ‘cures’ for COVID-19, widely spread via social media platforms. Or the case of black salve – a corrosive, unregulated topical agent touted online as a ‘natural cancer remedy’ – which has led to skin disfigurement and delayed medical treatment. These are not weeds; they’re toxic agents.
In dietary contexts, misinformation has led some patients to forgo clinically indicated treatment. The rise of the ‘carnivore diet’ in digital manosphere subcultures, for example, has spurred individuals with underlying conditions to abandon their usual eating patterns in favour of excessive animal fat consumption, exposing themselves to preventable complications.
These choices are not made in isolation. They are shaped by information environments algorithmically curated by powerful digital gatekeepers.
Platform responsibility or ethical evasion?
The idea of leaving ‘weeds’ in the digital garden reflects a broader moral shift: one that increasingly tolerates the risk of health harm as an unavoidable by-product of digital engagement. The perverse rationale is this: if misinformation is inevitable and dominant, removing it may deplete engaging content. But public health does not operate on inevitability or surrender; it operates on intervention.
What’s troubling is not merely the acceptance of misinformation, but the quiet resignation to its unavoidability in the digital age. Framing misinformation as part of a natural ecosystem (a necessary trade-off for engagement or user retention) risks absolving platforms of their duty to protect users from preventable harm. Worse, it blurs the distinction between opinion and evidence, between harmful advice and responsible information.
The ethical implications are stark. If a medical doctor advised a patient to ingest a harmful substance under the guise of ‘free-speech balance’, they would be in breach of their professional duty. Why should digital health environments be held to a lower standard?
The unwarned and the most vulnerable
Children, older adults, and those with lower literacy are especially susceptible to misleading health narratives. A growing body of research warns that adolescents, who often assess health advice based on personal relatability rather than safety, are particularly vulnerable to unvetted and potentially harmful practices circulating online.
Similarly, online groups promoting ‘natural immunity’ over childhood vaccinations have fuelled vaccine hesitancy among parents, contributing to recent measles outbreaks. In early 2025 alone, more than 800 cases were reported in the U.S., with at least three child deaths.
We cannot afford to shrug off these harms as growing pains in the digital age. Nor should we euphemise them as the cost of ‘content diversity’.
Toward a more ethical information ecosystem
What would a more responsible approach look like? For one, greater proactivity in identifying risky content on digital platforms, with visible warnings and reduced amplification. Content that radically diverges from established scientific consensus or essential public health guidance should be clearly flagged, or demoted altogether.
Second, platforms must be held accountable through regulatory standards and oversight. This includes being required to conduct audited misinformation risk assessments that estimate harm potential and other informational hazards. These assessments should guide transparent risk management and user protection.
The growing tolerance for misinformation as a necessary weed of digital life signals a dangerous ethical drift. When misinformation is likened to an inevitable part of the landscape, its risks are normalised. But in public health, normalising harm is not an option.
We don’t need to leave the weeds. We need to warn about them, remove them where possible, and shield the most vulnerable from their consequences.
The principle of ‘do no harm’ must apply not only in clinical care, but across all health-influencing environments, including the digital ones.
Author
Alex Ruani is a health misinformation researcher at University College London, chief science educator at The Health Sciences Academy where she leads large-scale educational and publishing initiatives that have reached over 100,000 health professionals in 170+ countries, elected council member of the Royal Society of Medicine Food & Health Council Forum, honorary member of The True Health Initiative, and part of the World Health Organization’s (WHO) Fides network. Her work brings together digital governance, public health, and the ethical dimensions of innovation, with implications for healthcare systems, regulatory frameworks, and global health policy.
UCL Profile: https://profiles.ucl.ac.uk/63386-alex-ruani
ORCID Profile: https://orcid.org/0000-0002-8191-0166
LinkedIn: https://www.linkedin.com/in/alejandraruani/
Declaration of interests
I have read and understood the BMJ Group policy on declaration of interests and declare the following interests: none