How should we regulate child sex robots: restriction or experimentation?

By John Danaher

This post is part of a series on ethical and legal perspectives in sexual and reproductive health first posted on the BMJ Sexual and Reproductive Health blog.

In 2017, the Crown Prosecution Service (CPS) decided to clamp down on the importation of child sex dolls into the UK. In doing so, they faced a problem: there was no established legal rule that explicitly banned the purchase and sale of these items. Consequently, the CPS had to get creative. They turned to an old law – the Customs Consolidation Act 1876 – that banned the importation of “obscene” items into the UK. Arguing that child sex dolls were obscene items, the CPS successfully prosecuted several individuals for purchasing them online and having them shipped to the UK.

In doing this, the CPS argued that they were acting in the interests of child protection. They argued that the purchase of child sex dolls was not an isolated phenomenon. Individuals who purchased them were likely to engage with other forms of child pornography, which could, in turn, lead to or encourage offences against children in the real world.

Child sex dolls are inanimate, human-like artifacts used for the purposes of sexual stimulation and gratification. But, given current technological trends, it is quite likely that people will create animate and robotized forms of these dolls in the near future. They are already doing this with adult forms of sex dolls. This raises the obvious question: what should the legal system do about these devices? Should we follow the lead of the CPS and look to ban their development, sale and use? Or should we permit them to be created on the grounds that, unlike other forms of child pornography, the creation of a child sex robot or doll does not involve any direct harm to real children?

In my article, ‘Regulating Child Sex Robots: Restriction or Experimentation?’, I survey the possible answers to this question and make a specific case for pessimism about our capacity to answer it adequately. This pessimism is itself, I suggest, a reason to favour restricting these devices. One of the first things I point out is that how you feel about this issue is likely to depend on your default assumptions about the legitimate role of the law in human life. If you adopt a strongly libertarian attitude, you might be inclined to permit the development, sale and use of these devices. After all, if there is no direct or obvious harm to another, then there is no justification for state intervention. On the other hand, if you adopt a more paternalistic attitude, or embrace a precautionary attitude to the regulation of new technologies, then you might be inclined to favour state intervention and restriction even in the absence of direct harm, perhaps on the grounds that encouraging such devices could lead to harm to others.

I initially make the case for an alternative view. I argue that there is a prima facie case to be made in favour of restricting practices that are both extremely offensive to the majority of people and morally corrosive to particular individuals. In other words, I make the prima facie case for a mild form of legal moralism in relation to child sex robots. Legal moralism is a controversial and well-debated idea. Many people of a liberal persuasion will tend to oppose it. But what I point out in the article is that some moralistic bans are difficult to reject, even for those who are staunchly liberal in their outlooks and attitudes (as I believe I am). So, for example, laws that prohibit the desecration of corpses strike many people as justifiable, even when such desecration causes no direct or indirect harm to others. It seems that there is something symbolically harmful about the act that warrants legal restriction.

The category of deeply offensive symbolic harms is difficult to define. This is a problem because we wouldn’t (or, at least, I wouldn’t) want to encourage excessive state scrutiny of that category. But although the category may be difficult to define, child sex robots would seem to be the paradigmatic example of something that fits within it. It is plausible to suppose that we can justifiably restrict them on this ground without sliding down a slippery slope to more paternalistic regulation of practices that harm no one other than the people who engage in them.

This argument for restrictive regulation of child sex robots is, however, just a prima facie argument, and quite a weak one at that. Someone could easily come along and argue that although these devices are symbolically harmful and deeply offensive, they are, nevertheless, socially beneficial. Perhaps child sex robots serve a greater good? Perhaps they could be used to treat child sex offenders in the same way that methadone is used to treat heroin addicts. In July 2014, at a conference in Berkeley, the roboticist Ronald Arkin suggested that this is a hypothesis that might be worth investigating. Arkin’s hypothesis, if confirmed, would provide support for the idea that some specific uses of this technology should not be banned; in fact, they should be incentivized or encouraged by the legal system. But, in many ways, the truth or falsity of the Arkin hypothesis is a red herring. The immediate challenge for the regulatory system is whether regulations should be put in place to facilitate its testing.

This is where my pessimism comes into play. I am not convinced that we should encourage the testing of the Arkin hypothesis because I don’t believe that we will come to a satisfactory conclusion about it. There are three reasons for this pessimism. First, past experience with analogous empirical debates (e.g. the link between the use of violent pornography and real-world sexual offending) does not bode well for this investigation. Those debates are marred by politicized and polarized research findings, often without clear policy implications. Second, ongoing scandals about the institutional biases and weaknesses of scientific research suggest that we may not be well equipped to find out the truth about something like Arkin’s hypothesis. Third, and finally, therapeutic interventions for child sex offending are already exceptionally difficult to test adequately. There is no reason to think it will be any easier when the therapeutic intervention involves child sex robots.

I conclude by suggesting that this pessimism about our capacity to test the Arkin hypothesis is perhaps itself a reason to favour restrictive regulation of child sex robots. If we are going to open this Pandora’s Box, we should do so with utmost caution.


‘Regulating Child Sex Robots: Restriction or Experimentation?’ by John Danaher was published in the Medical Law Review.
