{"id":3711,"date":"2020-03-16T02:16:31","date_gmt":"2020-03-16T01:16:31","guid":{"rendered":"https:\/\/blogs.bmj.com\/medical-ethics\/?p=3711"},"modified":"2020-03-16T02:25:15","modified_gmt":"2020-03-16T01:25:15","slug":"westworld-ethics-and-maltreating-robots","status":"publish","type":"post","link":"https:\/\/blogs.bmj.com\/medical-ethics\/2020\/03\/16\/westworld-ethics-and-maltreating-robots\/","title":{"rendered":"Westworld, ethics and maltreating robots"},"content":{"rendered":"<p>By <a href=\"https:\/\/twitter.com\/ColinGavaghan\">Colin Gavaghan<\/a> and Mike King<\/p>\n<p>This week saw the return, for a third season, of the critically acclaimed HBO series <a href=\"https:\/\/www.hbo.com\/westworld\">Westworld<\/a>. WW\u2019s central premise in its first 2 seasons was a theme park, sometime in the near future, populated by highly realistic robots or \u2018hosts\u2019. Human guests can pay exorbitant sums to interact with these robots in a huge range of ways. In the \u2018western\u2019-themed area \u2013 after which the show is named \u2013 guests can choose to be white-hatted heroes or black-hatted villains. The good guys get to be brave, chivalrous, honourable and generally decent. The bad guys, on the other hand, get to indulge in the darkest pits of human depravity, including murder, torture and rape.<\/p>\n<p>Of course, they aren\u2019t <em>really<\/em> murdering, torturing and raping, because their \u2018victims\u2019 are just machines. Sure, they look human, and they are highly convincing in their reactions of pain, fear, etc. But, the guests are assured, there\u2019s no actual suffering going on, because the \u2018hosts\u2019 aren\u2019t the sorts of things capable of suffering.<\/p>\n<p>It hopefully isn\u2019t too much of a spoiler to say that things turn out to be a bit different from that. But despite exciting developments in AI and robotics, we\u2019re still some way from creating thinking, feeling machines. 
It\u2019s also, surely, safe to say that, if we ever did create such beings, then we would owe them moral obligations not to treat them in the manner that the more sadistic guests in Westworld treat the hosts. Insofar as they would be capable of suffering, we would have duties not to treat them cruelly. And at least insofar as they had inner lives like our own, we would have duties not to kill them.<\/p>\n<p>Some academics, though, have recently started asking a different question. Might we have duties with regard to robots and AIs even if we know they <em>can\u2019t<\/em> feel or think? And if we do, could there be a case for these to be enforced via legal duties? These arguments aren\u2019t just about the super-sophisticated robots of Westworld, but about some <a href=\"https:\/\/en.wikipedia.org\/wiki\/Social_robot\">here-and-now \u2018social robots\u2019<\/a>, <a href=\"https:\/\/medicalfuturist.com\/the-top-12-social-companion-robots\/\">built to interact with us in various ways<\/a>, including <a href=\"https:\/\/www.medicaldevice-network.com\/comment\/what-are-the-main-types-of-robots-used-in-healthcare\/\">providing medical care and support<\/a>, some specifically designed to resemble <a href=\"http:\/\/www.parorobots.com\/\">cute live animals<\/a>.<\/p>\n<p>Any such duties wouldn\u2019t be owed to the robots themselves. There are different accounts of why we have duties, and to whom \u2013 or what \u2013 they can be owed, and we won\u2019t get into those here. For the purposes of this argument, we will stipulate that all robots we refer to, lacking any sort of internal mental life, are not morally considerable for their own sake. We\u2019ll also assume that these robots are very like us in form, and to some extent behaviour. 
They are humanlike, but still distinguishable from human persons.<\/p>\n<p>Given this, are there good reasons for us to have duties regarding such robots, or other reasons not to \u2018maltreat\u2019 them \u2013 that is, to treat them in ways that would be morally and legally objectionable or impermissible?<\/p>\n<p><em>Reason 1: Heightened risk<\/em><\/p>\n<p>Behaving in such ways towards robots \u2013 particularly those which behave and react in humanlike ways \u2013 would heighten the likelihood that we would act in similar ways towards actual humans. This may be via a process of emotional hardening; we may become more callous towards human suffering by \u2018practising\u2019 on mock humans. Or it may be by strengthening our darker desires by indulging them in virtual settings.<\/p>\n<p>A different reason for risk being heightened is that there\u2019s a greater chance of erroneous bad conduct. If harming robots is permissible, we might falsely believe that we\u2019re maltreating a robot when in fact we\u2019re harming a human. In episode 2 of Westworld, Angela responds to a guest unsure if she is human or robot: \u201cWell if you can\u2019t tell, does it matter?\u201d If we can\u2019t reliably distinguish human from robot, then that might give a pretty good reason not to maltreat something that we <em>think<\/em> is a robot.<\/p>\n<p><em>Reason 2: Harm to observers<\/em><\/p>\n<p>For Kate Darling, the likelihood that others will be distressed by seeing us mistreating a humanlike robot would be reason enough not to behave in that way. 
Maybe even reason enough to ban such behaviour: \u201c<a href=\"https:\/\/books.google.co.nz\/books?id=7YpeCwAAQBAJ&amp;pg=PA230&amp;lpg=PA230&amp;dq=%22societal+desire+for+robot+protection+should+be+taken+into+account+and+translated+into+law+as+soon+as+the+majority+calls+for+it.%22&amp;source=bl&amp;ots=rh-LcpMM95&amp;sig=ACfU3U04ku3F6XVl3g42_Et_V6L06cinfg&amp;hl=en&amp;sa=X&amp;ved=2ahUKEwjHromqy5boAhWMzDgGHS_4CIoQ6AEwAHoECAEQAQ#v=onepage&amp;q=%22societal%20desire%20for%20robot%20protection%20should%20be%20taken%20into%20account%20and%20translated%20into%20law%20as%20soon%20as%20the%20majority%20calls%20for%20it.%22&amp;f=false\">societal desire for robot protection should be taken into account and translated into law as soon as the majority calls for it.<\/a>\u201d<\/p>\n<p>Whether most people would really feel this way isn\u2019t certain, but it\u2019s not implausible. Boston Dynamics\u2019 headless quadrupedal Spot doesn\u2019t look anything like a human, and not even much like an animal, but many of us still feel an emotional response when we see it <a href=\"https:\/\/cdn.vox-cdn.com\/thumbor\/WqElvrcugB9pNZSgT1sfjLz_if4=\/49x0:589x360\/620x413\/filters:focal(49x0:589x360):gifv():no_upscale()\/cdn.vox-cdn.com\/uploads\/chorus_image\/image\/45681224\/kickedrobot17.0.0.gif\">being kicked around in \u2018stress tests\u2019<\/a>.<\/p>\n<p>A harder question is whether the mere fact of majority disapproval should provide a persuasive reason for moral disapproval or legal prohibition. There\u2019s a long tradition of moral and legal theory addressing that question, going all the way back to John Stuart Mill, via the <a href=\"https:\/\/www.bl.uk\/collection-items\/wolfenden-report-conclusion\">Wolfenden Report<\/a>.<\/p>\n<p>Writers like Joel Feinberg <a href=\"https:\/\/www.oxfordscholarship.com\/view\/10.1093\/0195052153.001.0001\/acprof-9780195052152\">have argued that causing offence can sometimes justify legal prohibition<\/a>. 
Even if it falls short of causing actual harm \u2013 Mill\u2019s famous threshold for justified criminalisation \u2013 offence is at least an unpleasant experience that we should (at least in non-trivial cases) be able to choose to avoid. Most legal systems have some sorts of rules against very offensive conduct in public. But the protection is against being unwillingly exposed to the behaviour, not against it happening at all.<\/p>\n<p>Maybe there would be a case, then, for a ban on mistreating humanlike robots \u2013 or even robots like Spot, if they evoke similar sorts of feelings \u2013 <em>in public<\/em>, or where people likely to be distressed by the sight (children, for example) would see it. It\u2019s certainly not unknown for law to impose those kinds of restrictions on various forms of behaviour. But being offended by the mere idea that someone, somewhere, was doing something that turned our collective stomachs wouldn\u2019t be enough to justify a ban.<\/p>\n<p><em>Reason 3: Legal moralism<\/em><\/p>\n<p>Not all of the arguments for rules against such behaviour are focused on its bad consequences for human beings. \u2018Legal moralism\u2019 is the idea that the law might sometimes be justified in prohibiting behaviours simply because they are morally reprehensible, even if they cause no harm to anyone else, directly or indirectly. John Danaher has addressed a species of this concern in relation to robots programmed for particularly troubling forms of sexual activity in his JME blog post: <a href=\"https:\/\/blogs.bmj.com\/medical-ethics\/2020\/02\/04\/how-should-we-regulate-child-sex-robots-restriction-or-experimentation\/\">sex robots that are designed for fantasies of child abuse or rape<\/a>. 
Danaher argues that a case could be made for banning such activities on the grounds that they are harmful to the \u201cmoral character\u201d of those who carry them out.<\/p>\n<p>But that would involve a decision that the conduct in question is inherently morally wrong. And where it doesn\u2019t actually harm anyone, infringe their rights, or cause offence, what would be the basis of that conclusion? Guests in Westworld would presumably argue that the element that makes murder and cruelty immoral isn\u2019t present in the absence of an actual victim. The claim that it\u2019s immoral even to <em>pretend<\/em> to do those things needs some sort of justification \u2013 and it raises a bunch of questions about those of us who enjoy shooting up opponents in paintball or Fortnite!<\/p>\n<p><em>Our concern<\/em><\/p>\n<p>We\u2019ve looked at three possible arguments in favour of duties to avoid mistreating humanlike robots. Whether they\u2019re persuasive will depend on evidence, but also on the competing weights of the different concerns involved.<\/p>\n<p>But whether or not we find them persuasive, we want to suggest that there may be another reason to go cautiously when imposing rules around humanlike robots. It has to do with a point made by Kate Darling, about the lifelike and the alive becoming subconsciously muddled. That, in fact, might turn out to be one of the biggest threats posed by AI and robots.<\/p>\n<p>Depending on their <a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/6628441\">design and capabilities<\/a>, we can engage with robots emotionally, and \u2018empathise\u2019 with them when they behave as though they are experiencing emotions. In <a href=\"https:\/\/www.theguardian.com\/society\/2014\/jul\/08\/paro-robot-seal-dementia-patients-nhs-japan\">some situations<\/a>, that might be a good thing. 
But it\u2019s also likely that it could render us susceptible to emotional <a href=\"https:\/\/plato.stanford.edu\/entries\/ethics-manipulation\/\">manipulation<\/a> in our interactions with some robots, <a href=\"https:\/\/www.theverge.com\/2018\/8\/2\/17642868\/robots-turn-off-beg-not-to-empathy-media-equation\">even when we know we\u2019re dealing with a robot<\/a>. As AI expert <a href=\"https:\/\/www.axa-research.org\/en\/project\/joanna-bryson\">Joanna Bryson has warned<\/a>, \u2018If people think of robots as humans, a lot of things could go wrong. It could open them up for economic exploitation. They might think they need to protect the robot or feel badly for turning it off.\u2019<\/p>\n<p>If Bryson is right about that danger, then a skill we\u2019re likely to need for the future is the capacity to distinguish between thinking, feeling human beings or other animals, and AIs that merely resemble them: that are programmed to mimic their responses, but which have no internal life. We will need to distinguish between them not only intellectually, but emotionally as well. The AI chatbot trying to sell us its wares doesn\u2019t actually care if we buy them; the robot dog isn\u2019t really sad if we don\u2019t play with it (or refuse to buy it an expensive robot brother or sister!).<\/p>\n<p>This might involve a fine-tuning of our empathetic and sympathetic reactions. And that might mean turning down the emotional volume when dealing with humanlike robots and AIs. Doing so may reduce the strength of some of the reasons for prohibiting the maltreatment of robots: we may be less likely to mistake a human for a robot in ethically risky ways, and we may be less offended by some treatment of robots.<\/p>\n<p>It doesn\u2019t, of course, mean that we have to treat them like the black-hatted guests in Westworld. 
But there might be a good reason to be wary of introducing moral taboos or legal rules that further blur the distinction between genuine claims on our moral concern, and false or manipulative claims on our insufficiently tuned instincts.<\/p>\n<p>Authors: Colin Gavaghan<sup>1<\/sup> and Mike King<sup>2<\/sup><\/p>\n<p>Affiliations:<\/p>\n<p><sup>1<\/sup> Faculty of Law, University of Otago, Dunedin, New Zealand<\/p>\n<p><sup>2<\/sup> Bioethics Centre, University of Otago, Dunedin, New Zealand<\/p>\n<p>Competing interests: None.<\/p>\n<p>Social media accounts of post authors: Colin Gavaghan <a href=\"https:\/\/twitter.com\/ColinGavaghan\">Twitter<\/a> <a href=\"https:\/\/www.facebook.com\/colin.gavaghan\">Facebook<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Colin Gavaghan and Mike King This week saw the return, for a third season, of the critically acclaimed HBO series Westworld. WW\u2019s central premise in its first 2 seasons was a theme park, sometime in the near future, populated by highly realistic robots or \u2018hosts\u2019. 
Human guests can pay exorbitant sums to interact with [&#8230;]<\/p>\n<p><a class=\"btn btn-secondary understrap-read-more-link\" href=\"https:\/\/blogs.bmj.com\/medical-ethics\/2020\/03\/16\/westworld-ethics-and-maltreating-robots\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":353,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8068,8057,8069],"tags":[],"class_list":["post-3711","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-medical-ethics","category-robots"]}