By Marieke Bak and Saar Hoek
When is technology helpful and when is it harmful? Can new technologies be used to create a better world, or are they only making matters worse? These are the questions that first crossed our minds when we came across deepfake therapy.
Deepfake technology uses deep learning—a form of artificial intelligence—to create hyper-realistic videos of people and situations, which can show scenarios that never actually happened. Tools to create deepfakes are increasingly easy to access and use, but developments are under way to help spot deepfaked videos. The rise of deepfake technology has downsides: politicians can be made to say anything, abuse and identity fraud take on new dimensions, and the truth gets harder to spot. In healthcare, however, it can also be put to positive use.
There are promising opportunities for psychotherapy that uses deepfakes. Imagine patients with severe grief being able to process their feelings with the person they’ve lost, or those with trauma being able to reshape their memories. A pioneering pilot study employed deepfake therapy for victims of sexual violence-related PTSD. The therapist controlled deepfake images of perpetrators to create therapeutic scenarios where victims could confront their assailants. Initial findings suggest this method provides a space for victims to process their trauma in a controlled, supportive environment, when traditional therapies have been exhausted.
Similarly, in grief counselling, deepfakes could facilitate virtual interactions with deceased loved ones, which could be especially beneficial in cases of complicated grief. While there is no evidence yet of the effectiveness of deepfake therapy in this setting, a 2020 Dutch documentary film showed the first experiment with deepfake therapy for bereaved individuals. In the documentary, the deepfake was voiced by trained therapists.
While deepfake technology may be promising for certain patients, it also raises significant ethical and legal questions. Deepfakes may be viewed as disturbing and are often associated with sci-fi stories. The grief counselling case is reminiscent of a particularly unsettling episode of the TV series Black Mirror, called ‘Be Right Back’, which depicts a woman trying to revive her deceased boyfriend, first as a chatbot based on his online chat history and later as a lifelike robot made of silicone. The first step is now actually possible with the use of large language models to create online personas of the dead, so-called deathbots or griefbots, which may change how people mourn. Visual deepfakes seem to be the next step into the uncanny valley.
In our article just published in the Journal of Medical Ethics, we discuss the ethical and legal aspects of deepfake therapy by zooming in on two possible applications: treatment of sexual violence-related PTSD and grief counselling. This conversation started when two of the authors used the topic in an elective bioethics course. For an advisory report assignment on emerging technologies, students interviewed legal scholars, who were inspired by the topic and the students’ insights. We then decided to team up as bioethicists and health lawyers to examine the ethical and legal aspects of deepfake therapy, resulting in an article built around two key questions.
First, we go back to the basics. What is ‘good care’? Does deepfake therapy qualify? Qualifying as good care entails adherence to relevant medical law and to bioethical principles—respect for autonomy, non-maleficence, beneficence, and justice. Therapists must ensure that deepfake therapy promotes patients’ well-being without causing disproportionate harm. Potential harms discussed in the paper include the risk of over-attachment and the blurring of reality. Similar risks have already been encountered with the use of chatbots for mental health problems, with potentially disastrous consequences. The relevant difference, however, is that deepfake therapy, as it is now envisioned, would be controlled and overseen by a trained therapist. Therapists must carefully monitor the risks of deepfake therapy, tailoring their approach to each patient’s unique needs and vulnerabilities.
A different perspective is that of the deepfaked person. Does deepfake therapy safeguard privacy and consent sufficiently? In cases where the depicted person is deceased, the requirement of consent brings ethical and legal complexities. Legal frameworks like the General Data Protection Regulation do not generally extend to deceased individuals, but if a deceased person has explicitly objected to the use of their image for deepfake therapy, it would be unethical to override their wishes. On the other hand, if their preferences are unknown, the potential therapeutic benefits might justify the use. In the case of depicting perpetrators of sexual violence, therapists might also rely on the ‘legitimate interest’ of the patient. However, safeguards such as data security, combined with clear therapeutic benefits, are crucial to justify the use of someone’s image without explicit consent.
As deepfake technology evolves, its integration into mental health care requires thorough public dialogue and further research on effectiveness and acceptability. Deepfake therapy must be guided by ethical and legal standards to safeguard patient well-being and societal trust, and additional issues of accessibility and environmental impact must be addressed to ensure the good use of this emerging technology.
Deepfake technology presents a fascinating yet challenging frontier in psychotherapy. Its ability to create realistic, controlled therapeutic scenarios offers new avenues for treating trauma and grief, but this should be guided by ongoing ethical-legal reflection, for which our article serves as a starting point.
Author(s): Saar Hoek (1), Suzanne Metselaar (2), Corrette Ploem (1,2), Marieke Bak (2, 3)
Affiliations: (1) Law Centre for Health and Life, Faculty of Law, University of Amsterdam, Netherlands; (2) Department of Ethics, Law & Humanities, Amsterdam UMC, Netherlands; (3) Institute for History and Ethics of Medicine, Technical University of Munich, Germany
Competing interests: None declared
Social media accounts of post author(s): https://nl.linkedin.com/in/mariekebak and https://nl.linkedin.com/in/saar-hoek-2b191a147