{"id":4774,"date":"2026-04-14T04:00:12","date_gmt":"2026-04-14T03:00:12","guid":{"rendered":"https:\/\/blogs.bmj.com\/medical-ethics\/?p=4774"},"modified":"2026-04-14T04:00:12","modified_gmt":"2026-04-14T03:00:12","slug":"advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots","status":"publish","type":"post","link":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/","title":{"rendered":"Advertising to the distressed: The commodification of mental health data in AI chatbots"},"content":{"rendered":"<p>By Nicole Gross and Hannah van Kolfschooten<\/p>\n<p>Generative Artificial Intelligence (genAI) chatbots have become an important outlet for many people around the world who are experiencing mental health issues. Of its 800 million weekly users, around 10 percent use ChatGPT for <a href=\"https:\/\/openai.com\/index\/affective-use-study\/\">emotional support<\/a> while more than one million use the chatbot to talk about issues such as depression, <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\">psychosis and suicidal ideation<\/a>. For many users, these conversations feel intimate, tailored and supportive, like talking to a good friend or therapist. Yet these interactions take place within commercial digital platforms whose business models rely on collecting, analysing and commodifying user data. 
As Big Tech companies explore the use of behavioural advertising in genAI systems, drawn by a global health data market worth an estimated <a href=\"https:\/\/www.businessresearchinsights.com\/market-reports\/healthcare-data-market-121419\">$74.41 bn<\/a>, profound ethical tensions arise between the commodification of user data and the protection of vulnerable individuals seeking mental health support.<\/p>\n<p>On one hand, the World Health Organization states that mental health is a <a href=\"https:\/\/www.who.int\/news-room\/questions-and-answers\/item\/mental-health-promoting-and-protecting-human-rights\">fundamental human right<\/a>, which means that everyone has the right to available, accessible, acceptable, and good-quality mental healthcare. As more than <a href=\"https:\/\/www.who.int\/news\/item\/02-09-2025-who-releases-new-reports-and-estimates-highlighting-urgent-gaps-in-mental-health\">1 billion people<\/a> live with mental health issues and many do not receive adequate care, more and more turn to genAI chatbots, which are often free, accessible and highly interactive. As AI chatbots are becoming quasi-therapeutic spaces, they exert an influence over the right to mental health. However, unlike traditional mental healthcare contexts where confidentiality and professional duties restrict the use of patient information, AI chatbots operate within a commercial ecosystem that is designed to maximise engagement and monetise user attention. This means that users pay a high yet often invisible price for this support.<\/p>\n<p>The commodification of mental health data works as follows: tech companies collect and analyse any information that appears in prompts, chat logs and other user interactions. 
While some chatbots, such as <a href=\"https:\/\/x.ai\/legal\/privacy-policy\">Grok<\/a>, claim that \u201cwe do not sell your personal information or use it for marketing\u201d, their privacy policies clearly show that aggregated and de-identified data is being \u2018shared\u2019 with affiliates and third parties, including advertisers and other business partners (Table 1). Grok started to run ads in <a href=\"https:\/\/www.ecommercenorthamerica.org\/2025\/08\/10\/x-grok-ai-chatbot-ads\/\">August 2025<\/a> and ChatGPT has been trialling ads since <a href=\"https:\/\/techcrunch.com\/2026\/02\/09\/chatgpt-rolls-out-ads\/\">February 2026.<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Table 1 \u2013 Privacy Policy Extracts from ChatGPT, Gemini, Grok, and CharacterAI<\/strong><\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"85\"><strong>Chatbot<\/strong><\/td>\n<td width=\"387\"><strong>Extracts from Privacy Policies <\/strong><\/td>\n<td width=\"129\"><strong>Link<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"85\">ChatGPT<\/td>\n<td width=\"387\">&#8220;To assist us in meeting business operations needs and to perform certain services and functions, we disclose Personal Data to vendors and service providers, including providers of hosting services, customer service vendors, cloud services, content delivery services, support and safety services, email communication software, web analytics services, payment and transaction processors, search and shopping providers, and information technology providers.&#8221;<\/p>\n<p>&nbsp;<\/td>\n<td width=\"129\"><a href=\"https:\/\/openai.com\/policies\/row-privacy-policy\/\">https:\/\/openai.com\/policies\/row-privacy-policy\/<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"85\">Google Gemini<\/td>\n<td width=\"387\">&#8220;We may share non-personally identifiable information publicly and with our partners \u2013 such as publishers, advertisers, developers or rights 
holders.&#8221;<\/p>\n<p>&nbsp;<\/td>\n<td width=\"129\"><a href=\"https:\/\/policies.google.com\/privacy\">https:\/\/policies.google.com\/privacy<\/a><\/td>\n<\/tr>\n<tr>\n<td width=\"85\">Grok<\/td>\n<td width=\"387\">&#8220;We may use your personal information for a variety of purposes&#8230;&#8221; &#8220;For example to develop new product features&#8230;to operate and expand our business activities, to identify new customers, and for data analysis.&#8221;<\/p>\n<p>&nbsp;<\/td>\n<td width=\"129\"><a href=\"https:\/\/x.ai\/legal\/privacy-policy\">https:\/\/x.ai\/legal\/privacy-policy<\/a><\/p>\n<p><strong>\u00a0<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"85\">CharacterAI<\/td>\n<td width=\"387\">&#8220;We use information for the following purposes:&#8221;&#8230; &#8220;Provide advertising and recruit new users, including provision of tailored advertising.&#8221;<\/td>\n<td width=\"129\"><a href=\"https:\/\/policies.character.ai\/privacy\">https:\/\/policies.character.ai\/privacy<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>\u00a0<\/strong><\/p>\n<p>This type of <a href=\"https:\/\/cambridgeanalytica.org\/corporate-practices\/how-data-brokers-built-a-shadow-profile-of-you-without-your-knowledge-50509\/\">psychographic profiling<\/a> is highly problematic: by analysing prompts and chat logs, genAI systems can infer sensitive information about a person\u2019s psychological state and emotional vulnerabilities. The ethical concern is that such signals can then be used to refine advertising strategies that target individuals precisely at moments when they are most vulnerable. 
In practice, this means that a user who confides to a chatbot that they feel anxious or lonely may unknowingly generate a behavioural profile that categorises them as a \u201cstress reactor,\u201d \u201ceasily deflated,\u201d or \u201creceptive to emotional messaging.\u201d A person who turns to a chatbot to talk about insomnia or anxiety may later be exposed to advertisements for sleep supplements or expensive therapy apps, for instance. Or someone discussing body image could be categorised as responsive to aspirational messaging and subsequently shown ads for cosmetic procedures, weight-loss programmes or confidence-boosting services. In these cases, deeply personal disclosures become inputs into a commercial system that predicts and commodifies behaviour.<\/p>\n<p>While users are warned in (admittedly lengthy and complex) T&amp;Cs that they should not include personal or sensitive information in their prompts, many people remain unaware of just how much data genAI platforms collect and where their data goes, and this raises serious concerns about manipulation, exploitation and the right to privacy. Arguably, people could always choose not to use chatbots or read the T&amp;Cs carefully and set their privacy controls accordingly. However, this is an arduous process as users are typically opted-in by default and platforms may still personalise ads even when ad personalisation is <a href=\"https:\/\/help.openai.com\/en\/articles\/20001047-ads-in-chatgpt\">turned off<\/a>. Platform power and information asymmetries reinforce Big Tech\u2019s business models while leaving users with little real choice over what happens to their data once it enters the commercial ecosystem. 
At the same time, the ads users are exposed to may even exacerbate the very mental health struggles that led them to seek support in the first place.<\/p>\n<p>Most genAI systems are <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC10930608\/\">neither strictly risk-assessed nor cohesively regulated<\/a> globally. While the EU\u2019s regulatory frameworks &#8211; GDPR, the AI Act and the Digital Services Act &#8211; apply to genAI systems, none of these \u2018gold standard\u2019 regulations directly address the risks that arise when AI chatbots use personal and emotional disclosures in an advertising-driven ecosystem. The EU\u2019s recent proposal of a <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/digital-omnibus-regulation-proposal\">Digital Omnibus<\/a> also marks a further shift away from regulation and towards the endorsement of competitiveness. Without stronger safeguards, the commodification of emotionally vulnerable interactions risks turning moments of psychological distress into opportunities for commercial exploitation.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Authors\/Affiliations<\/strong>:<\/p>\n<p>Nicole Gross, Associate Professor in Business &amp; Society, National College of Ireland<\/p>\n<p>Hannah van Kolfschooten, Postdoctoral Research Fellow, University of Basel<\/p>\n<p><strong>Conflicts of Interest<\/strong>: None to declare<\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Nicole Gross and Hannah van Kolfschooten Generative Artificial Intelligence (genAI) chatbots have become an important outlet for many people around the world who are experiencing mental health issues. 
Of its 800 million weekly users, around 10 percent use ChatGPT for emotional support while more than one million use the chatbot to talk about issues [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":503,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8068,2152],"tags":[],"class_list":["post-4774","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-mental-health"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics blog\" \/>\n<meta property=\"og:description\" content=\"By Nicole Gross and Hannah van Kolfschooten Generative Artificial Intelligence (genAI) chatbots have become an important outlet for many people around the world who are experiencing mental health issues. 
Of its 800 million weekly users, around 10 percent use ChatGPT for emotional support while more than one million use the chatbot to talk about issues [...]Read More...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/\" \/>\n<meta property=\"og:site_name\" content=\"Journal of Medical Ethics blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-14T03:00:12+00:00\" \/>\n<meta name=\"author\" content=\"owenschaefer\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"owenschaefer\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/\"},\"author\":{\"name\":\"owenschaefer\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#\\\/schema\\\/person\\\/aed22897c55740f89c1ad1508985d1c0\"},\"headline\":\"Advertising to the distressed: The commodification of mental health data in AI 
chatbots\",\"datePublished\":\"2026-04-14T03:00:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/\"},\"wordCount\":993,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#organization\"},\"articleSection\":[\"Artificial intelligence\",\"Mental Health\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/\",\"url\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/\",\"name\":\"Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics 
blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#website\"},\"datePublished\":\"2026-04-14T03:00:12+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/2026\\\/04\\\/14\\\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Advertising to the distressed: The commodification of mental health data in AI chatbots\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#website\",\"url\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/\",\"name\":\"Journal of Medical Ethics blog\",\"description\":\"A blog to discuss the ethics of medicine in its many guises and formats.\",\"publisher\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#organization\",\"name\":\"Journal of Medical Ethics 
blog\",\"url\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/files\\\/2026\\\/04\\\/jme-logo.png\",\"contentUrl\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/files\\\/2026\\\/04\\\/jme-logo.png\",\"width\":200,\"height\":50,\"caption\":\"Journal of Medical Ethics blog\"},\"image\":{\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/#\\\/schema\\\/person\\\/aed22897c55740f89c1ad1508985d1c0\",\"name\":\"owenschaefer\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g\",\"caption\":\"owenschaefer\"},\"url\":\"https:\\\/\\\/blogs.bmj.com\\\/medical-ethics\\\/author\\\/owenschaefer\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/","og_locale":"en_US","og_type":"article","og_title":"Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics blog","og_description":"By Nicole Gross and Hannah van Kolfschooten Generative Artificial Intelligence (genAI) chatbots have become an important outlet for many people around the world who are experiencing mental health issues. Of its 800 million weekly users, around 10 percent use ChatGPT for emotional support while more than one million use the chatbot to talk about issues [...]Read More...","og_url":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/","og_site_name":"Journal of Medical Ethics blog","article_published_time":"2026-04-14T03:00:12+00:00","author":"owenschaefer","twitter_card":"summary_large_image","twitter_misc":{"Written by":"owenschaefer","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/#article","isPartOf":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/"},"author":{"name":"owenschaefer","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#\/schema\/person\/aed22897c55740f89c1ad1508985d1c0"},"headline":"Advertising to the distressed: The commodification of mental health data in AI chatbots","datePublished":"2026-04-14T03:00:12+00:00","mainEntityOfPage":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/"},"wordCount":993,"commentCount":0,"publisher":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#organization"},"articleSection":["Artificial intelligence","Mental Health"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/","url":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/","name":"Advertising to the distressed: The commodification of mental health data in AI chatbots - Journal of Medical Ethics 
blog","isPartOf":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#website"},"datePublished":"2026-04-14T03:00:12+00:00","breadcrumb":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/2026\/04\/14\/advertising-to-the-distressed-the-commodification-of-mental-health-data-in-ai-chatbots\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blogs.bmj.com\/medical-ethics\/"},{"@type":"ListItem","position":2,"name":"Advertising to the distressed: The commodification of mental health data in AI chatbots"}]},{"@type":"WebSite","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#website","url":"https:\/\/blogs.bmj.com\/medical-ethics\/","name":"Journal of Medical Ethics blog","description":"A blog to discuss the ethics of medicine in its many guises and formats.","publisher":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blogs.bmj.com\/medical-ethics\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#organization","name":"Journal of Medical Ethics 
blog","url":"https:\/\/blogs.bmj.com\/medical-ethics\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#\/schema\/logo\/image\/","url":"https:\/\/blogs.bmj.com\/medical-ethics\/files\/2026\/04\/jme-logo.png","contentUrl":"https:\/\/blogs.bmj.com\/medical-ethics\/files\/2026\/04\/jme-logo.png","width":200,"height":50,"caption":"Journal of Medical Ethics blog"},"image":{"@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/blogs.bmj.com\/medical-ethics\/#\/schema\/person\/aed22897c55740f89c1ad1508985d1c0","name":"owenschaefer","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/183d62de3bcdcd8209a94ae1808c3a024d7b74e755a1c85df27517382b2b5b62?s=96&d=mm&r=g","caption":"owenschaefer"},"url":"https:\/\/blogs.bmj.com\/medical-ethics\/author\/owenschaefer\/"}]}},"_links":{"self":[{"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/posts\/4774","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/users\/503"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/comments?post=4774"}],"version-history":[{"count":0,"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/posts\/4774\/revisions"}],"wp:attachment":[{"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/media?parent=4774"}],"wp:term":[{"taxonomy":"category","embeddable":true,"h
ref":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/categories?post=4774"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.bmj.com\/medical-ethics\/wp-json\/wp\/v2\/tags?post=4774"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}