{"id":4656,"date":"2025-04-08T13:54:22","date_gmt":"2025-04-08T12:54:22","guid":{"rendered":"https:\/\/blogs.bmj.com\/medical-ethics\/?p=4656"},"modified":"2025-04-08T13:54:22","modified_gmt":"2025-04-08T12:54:22","slug":"are-ai-doctors-becoming-more-transparent-than-human-ones","status":"publish","type":"post","link":"https:\/\/blogs.bmj.com\/medical-ethics\/2025\/04\/08\/are-ai-doctors-becoming-more-transparent-than-human-ones\/","title":{"rendered":"Are \u2018AI doctors\u2019 becoming more transparent than human ones?"},"content":{"rendered":"<p>By Hazem Zohny<\/p>\n<p>A major worry with AI in healthcare is the &#8216;black box&#8217; problem: deep learning AIs reach conclusions without explaining <em>how<\/em>. In healthcare, where trust is essential, this is a serious problem.<\/p>\n<p>Recent AI developments challenge this worry. One example is Google\u2019s <a href=\"https:\/\/research.google\/blog\/from-diagnosis-to-treatment-advancing-amie-for-longitudinal-disease-management\/\"><em>Articulate Medical Intelligence Explorer<\/em> <\/a>(AMIE), an LLM-based system for clinical conversations and multi-visit care planning.<\/p>\n<p>AMIE generates detailed clinical notes, differential diagnoses, and care plans in natural language, complete with step-by-step reasoning, justifications, and citations linked to clinical guidelines. These outputs are designed to be readable, verifiable, and useful for clinicians, and they build on prior patient interactions to support multi-visit care planning over time.<\/p>\n<p>In some ways, this system exhibits more transparency than human clinicians. And that challenges how we think about trust in healthcare AI.<\/p>\n<h3>A new type transparency<\/h3>\n<p>There are different facets to AI transparency in healthcare, each relevant to different stakeholders\u2014patients, clinicians, regulators, and AI developers. 
One way to break these down is:<\/p>\n<ul>\n<li><strong>Mechanistic transparency<\/strong>: insight into how the system works under the hood\u2014its architecture, training, and internal processing. <em>(AI developers, regulators)<\/em><\/li>\n<li><strong>Data transparency<\/strong>: what data the system was trained on, how that data is used, and how it might influence outputs. <em>(Patients, clinicians, regulators, AI developers)<\/em><\/li>\n<li><strong>Organisational transparency<\/strong>: how the system is deployed\u2014who is responsible for it, how it fits into clinical workflows, and how it interacts with professional norms. <em>(Patients, clinicians, regulators)<\/em><\/li>\n<\/ul>\n<p>So-called \u2018reasoning\u2019 LLMs communicate the deliberations behind their eventual answer in what\u2019s called their chain-of-thought. They are not mechanistically transparent, but they exhibit a new type of transparency we might call:<\/p>\n<ul>\n<li><strong>Reasoning transparency<\/strong>: the ability to communicate in natural language the steps, justifications, and considerations behind a given output. <em>(Clinicians, patients)<\/em><\/li>\n<\/ul>\n<p>One way to think about this is by analogy to human decision-making: mechanistic transparency is like insight into the brain\u2019s inner workings, while reasoning transparency is like access to the person\u2019s verbal explanation of their thought process\u2014even if that explanation is partly post-hoc.<\/p>\n<p>Reasoning transparency opens the door to a new kind of interaction with AI systems, where their conclusions can be examined, contested, or adapted\u2014much like a conversation with a human colleague.<\/p>\n<h3>AMIE\u2019s architecture<\/h3>\n<p>AMIE is an example of this. It\u2019s built around a dual architecture designed to make its \u201cdeliberations\u201d visible and accompanied by justifications. 
It includes:<\/p>\n<ul>\n<li>A <strong>Dialogue Agent<\/strong> that interacts with patients, gathering information and building rapport.<\/li>\n<li>A <strong>Management Reasoning Agent<\/strong> that carries out the clinical reasoning, analysing the patient\u2019s situation and recommending next steps.<\/li>\n<\/ul>\n<p>What sets AMIE apart are two reasoning transparency mechanisms:<\/p>\n<p><strong>First, it shows its reasoning process.<\/strong><br \/>\nAMIE &#8220;thinks out loud.&#8221; It outlines its step-by-step analysis: concerns, factors weighed, and options considered. This reasoning trace is visible to clinicians, and potentially to patients, so they can follow how the system arrives at its conclusions.<\/p>\n<p><strong>Second, it justifies its recommendations with reference to clinical guidelines.<\/strong><br \/>\nEach conclusion it reaches (ordering tests, making diagnoses, etc.) is rooted in justifications that reference clinical sources, like BMJ Best Practice or NICE guidelines. For instance, <a href=\"https:\/\/arxiv.org\/abs\/2503.06074\">when advising on suspected pheochromocytoma, it cited NICE guideline NG136 and BMJ Best Practice document bmj26<\/a>. That means users can verify whether AMIE\u2019s suggestions are consistent with accepted medical standards.<\/p>\n<p>The first lets users see how the system weighs evidence and reaches conclusions via a higher-level, human-readable account, and the second allows for verifying that its recommendations align with established standards.<\/p>\n<h3>More transparent human clinicians?<\/h3>\n<p>Human clinicians can talk through their decisions, respond to questions, and build trust through conversation. But much of their judgment relies on tacit expertise, mental shortcuts, and fast, intuitive processes.<\/p>\n<p>These forms of reasoning can be highly effective, but they\u2019re hard to unpack, even for the clinician themselves. 
Like all humans, clinicians sometimes act on intuition or experience without full access to the reasoning behind their choices.<\/p>\n<p>That\u2019s why <a href=\"https:\/\/www.cmu.edu\/dietrich\/philosophy\/docs\/london\/hastings.pdf\">some argue<\/a> that medicine has always involved a kind of \u201cblack box\u201d reasoning. We don\u2019t expect clinicians to have full access to their own tacit deliberations or how those link to clinical guidelines, let alone the cognitive or neural mechanisms behind those decisions. Instead, we trust their training, professional norms, and willingness to explain their thinking when prompted (which allows us to contest them).<\/p>\n<p>What\u2019s striking is not just that AMIE can offer explanations\u2014but that it does so more systematically than most human clinicians: it lays out its reasoning consistently, links it to clinical guidelines, and builds a coherent narrative across visits. In some ways, it offers a more legible and auditable form of care.<\/p>\n<p>And it does this to impressive effect: in a <a href=\"https:\/\/arxiv.org\/abs\/2503.06074\">blinded study<\/a> comparing AMIE\u2019s care plans with those of human primary care physicians, reviewers found that AMIE\u2019s treatment recommendations were more consistently aligned with clinical guidelines\u2014ranging from 89% to 93% across three visits, compared to 75% to 81% for the physicians. AMIE\u2019s recommendations were also judged to be more precise, both in treatments and investigations, and this advantage held across multiple patient visits.<\/p>\n<p>Reviewers were also asked which plan they&#8217;d prefer for themselves or their family. Again, without knowing who wrote the plan, they preferred AMIE\u2019s in 42% of cases, the human doctor\u2019s in only 8%, and had no preference in the rest.<\/p>\n<h3>The limits of \u2018thinking out loud\u2019<\/h3>\n<p>To be clear, AMIE doesn\u2019t eliminate the black box problem. 
As with all deep learning systems, its inner workings remain opaque. Moreover, the \u2018thinking out loud\u2019 aspect of its reasoning transparency may itself be deceptive.<\/p>\n<p>Recent <a href=\"https:\/\/www.anthropic.com\/research\/reasoning-models-dont-say-think\">work by Anthropic<\/a> suggests that while the \u2018thoughts\u2019 of reasoning models appear clear and coherent, their final outputs may still rely on unacknowledged cues or shortcuts.<\/p>\n<p>In other words, AMIE\u2019s deliberations may look like thoughtful analysis, but at least some of this may just be well-structured justification generated after the fact. This arguably mirrors human post-hoc rationalisation\u2014people often construct reasons for their decisions without full awareness of the real drivers, even if their intention is to be transparent about their deliberations.<\/p>\n<p>Still, this suggests we should be cautious about over-interpreting its step-by-step explanations. In that light, its ability to ground its decision points in clinical guidelines (which is ultimately what allows us to verify and contest it) may deserve more weight than the appearance of deliberative reasoning alone.<\/p>\n<p>Even so, \u2018thinking out loud\u2019 appears to play a crucial role, as models that do so hallucinate far less in this context (<a href=\"https:\/\/arxiv.org\/pdf\/2503.06074\">0.7% vs. &gt;5% in non-reasoning versions<\/a>).<\/p>\n<h3>Other caveats and questions<\/h3>\n<ul>\n<li><strong>Simulated patients, not real ones:<\/strong> AMIE\u2019s performance was evaluated in controlled studies with simulated patient interactions. How it behaves in real-world clinical settings remains to be seen.<\/li>\n<li><strong>Guideline adherence isn\u2019t always enough:<\/strong> Clinical guidelines are valuable, but not infallible. They evolve, can vary across contexts, and don\u2019t always capture edge cases or systemic biases. 
Judging AI quality by adherence alone risks missing these subtleties.<\/li>\n<li><strong>Is post-hoc reasoning enough?<\/strong> If AMIE\u2019s step-by-step reasoning is partially confabulated rather than a reflection of how the reasoning process meaningfully <em>caused<\/em> the output, what kind of trust do systems like it warrant?<\/li>\n<li><strong>Does fluency create false confidence?<\/strong> As systems become more articulate, do we risk mistaking coherence for competence\u2014and trusting them more than we should? This reminds us that reasoning transparency is only one part of the trust equation.<\/li>\n<\/ul>\n<hr \/>\n<p>Author: Hazem Zohny<\/p>\n<p>Affiliations: Uehiro Oxford Institute, University of Oxford<\/p>\n<p>Competing interests: None declared<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Hazem Zohny A major worry with AI in healthcare is the &#8216;black box&#8217; problem: deep learning AIs reach conclusions without explaining how. In healthcare, where trust is essential, this is a serious problem. Recent AI developments challenge this worry. 
One example is Google\u2019s Articulate Medical Intelligence Explorer (AMIE), an LLM-based system for clinical conversations [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"https:\/\/blogs.bmj.com\/medical-ethics\/2025\/04\/08\/are-ai-doctors-becoming-more-transparent-than-human-ones\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":354,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8068],"tags":[],"class_list":["post-4656","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"]}