Invisible prescribers: the risks of Google’s AI summaries

By Hannah van Kolfschooten and Nicole Gross

With digital technologies, your patients have a ‘doctor in their pocket’. But something new is happening when they search online for medical advice. Typing a question such as “Can I take ibuprofen with blood pressure tablets?” or “What helps against chest pain?” into Google no longer produces the familiar list of links. Instead, a confident, AI-generated box appears at the top of the page, offering what looks like an authoritative answer. Google calls this feature an AI Overview. Microsoft’s Copilot provides similar AI-generated summaries through its Edge browser.

These systems mark a shift in how people find and interpret health information online. By design, these summaries reduce click-through rates to real websites by 40–60%, replacing the process of browsing diverse sources with a single, seemingly definitive response. First launched in the United States in 2024, where they drew criticism for misleading health advice, AI Overviews are now expanding across Europe.

AI Overviews are not just another way of “Googling symptoms.” Until recently, users were presented with a variety of sources: public health agencies, hospital websites, patient forums, news outlets, and wellness blogs. Although the quality of these sources varied, the diversity itself enabled patients to cross-check information, prepare questions, and participate more actively in their care. Searching online can empower patients when they have access to multiple perspectives, allowing them to assess information critically. With AI Overviews, that step disappears.

For instance, when asked “Can I clean my teeth with coconut oil?”, Google’s AI Overview confidently provides detailed instructions for “oil pulling”, a practice unsupported by dental science (see Figure 1, screenshot taken on 6 November 2025). The answer blends wellness claims with legitimate oral hygiene advice, giving unproven methods an aura of credibility. Similarly, when asked “What does a heart attack feel like in women?”, Microsoft’s Copilot lists some symptoms but omits crucial ones such as cold sweats, anxiety resembling a panic attack, or sudden shortness of breath.

Figure 1. Google’s AI Overview presenting misleading dental advice as fact

This trend is alarming. Health information online is already inconsistent: a mix of accurate resources, commercial promotion, and unverified claims. An AI system trained to “summarise” amplifies what is most visible, not necessarily what is most reliable. Once health information is summarised and displayed as an authoritative “AI answer”, the effect is not neutral. Three factors make this especially dangerous in healthcare.

First, hallucination. Generative AI systems do not understand truth; they generate text that sounds plausible. They can therefore produce confident but inaccurate statements, a phenomenon known as “hallucination”. Studies have found that AI models hallucinate in as many as 48% of responses. An AI Overview answering “Can I use antibiotics left over from a previous infection?” might combine fragments of blogs and partial quotes to suggest it is safe, unintentionally encouraging misuse and fuelling antimicrobial resistance. Since these AI answers appear at the very top of Google’s results, they can spread misinformation exceptionally fast.

Second, performativity. Words do not merely describe the world; they can change it. When Google’s AI confidently states, “Natural supplements can replace antidepressants for mild depression”, it does more than report information: it can persuade. For many users, such statements act as endorsements for real-world behaviour: stopping medication, delaying treatment, or advising others to do the same. In this sense, AI-generated content functions like an invisible prescriber, offering advice without oversight or context.

Third, lack of choice. AI summaries now appear by default in several search engines, and users cannot permanently disable Google’s AI Overviews. Most people using Google for health information never chose to consult an AI system; they are simply presented with its output as part of a routine search. Where users once compared multiple sources, they now receive one synthetic answer, stripped of the reasoning and source prioritisation behind it. In medicine, that means the careful balance between official guidelines and individual context is replaced by an algorithm’s best guess. Google says that health searches receive extra scrutiny under its “Your Money or Your Life” policy, but this does not appear to extend to its AI Overviews.

Health professionals, patients, and policymakers cannot control Google’s algorithms, but they must anticipate their effects. Clinicians can ask patients what they have read online and guide them toward trustworthy sources. Patients should be made aware of the persuasive power of AI-generated text, especially given its tendency to hallucinate. Regulators, meanwhile, should treat AI-generated search results as a form of digital health communication, subject to standards of transparency, accuracy, and accountability. Public institutions can also partner with technology platforms to prioritise evidence-based sources and clearly flag misleading health content.

Search engines have always shaped how people think about illness, risk, and care. But with AI summaries, Big Tech has turned search engines from tools of discovery into voices of authority. Unless regulators act swiftly, the next major wave of health misinformation will not come from social media, but from the search engine itself.

Author/Affiliation:

Hannah van Kolfschooten, Lecturer-Researcher in Law & AI, University of Amsterdam

Nicole Gross, Associate Professor in Business & Society, National College of Ireland

Conflicts of Interest: None to declare
