By Steven R. Kraaijeveld, Hanneke van Heijster, Nadine Bol, and Kris E. Bevelander.
The rising costs of health care in Europe and in many countries around the world have led to calls to use technology and digitalization to “drive more equitable and sustainable outcomes for all”. Digitalizing parts of health care may not only reduce costs; it could also allow people to access health care resources more easily than through non-technological means. Digital tools like chatbots and virtual assistants could be tailored, for example, to address the specific needs of users; and, unlike human beings, digital tools are always available.
Even though tailoring digital tools to the specific care needs of patient groups is increasingly being considered, more work needs to be done to understand the effects and desirability of such tools. Tailored digital tools may be particularly beneficial in accommodating the diverse needs of people in vulnerable positions; for example, people who require long-term care and support in their daily lives, such as individuals with cognitive impairments. Our team has therefore been involved in a multidisciplinary project to design a “sensitive” virtual assistant (SVA), which functions like an intelligent chatbot, to help people in vulnerable positions access care. The SVA was designed in close collaboration with “experts-by-experience” and “citizen scientists”, who are ultimately the ones who might benefit from the technology. Importantly, the SVA uses artificial intelligence (AI) to recognize different types of users and to adapt its way of interacting accordingly, thereby making it “sensitive” to their needs.
At the same time, the ethics of using digital tools like virtual assistants for people in vulnerable positions remains underexplored. To address this gap, we analysed focus group data that were collected within the larger project concerning the needs of long-term care recipients and their caregivers. We then considered the target group’s perspectives as part of a broader ethical reflection on the SVA and similar technologies.
In our paper, we discuss several ethical issues related to implementing the SVA and similar technologies in health care. We question, for instance, how desirable it would be to replace human contact in care, especially for more complex care requests. We raise concerns that it may not always be clear who is responsible for the proper functioning of such technologies, particularly when they cause harm. Finally, we call attention to the risk of excluding people with limited verbal communication and digital skills. Even when relevant target groups participate and are actively included in the design and development of these digital tools, as was the case in our project, we must pay close attention to those individuals for whom such tools remain unsuitable (for example, individuals with very low literacy and digital skills). Human contact should always remain an easily accessible way to ask care-related questions, so that a majority of users who are happy with such tools does not end up masking subgroups who struggle with them.
Digital tools like the SVA are developing rapidly and, while promising in many ways, are still in their infancy. We must continue to think carefully about how, when, and for whom their use ought to be considered.
Paper title: Ethics of Using Virtual Assistants to Help People in Vulnerable Positions Access Care
Authors: Steven R. Kraaijeveld, Hanneke van Heijster, Nadine Bol, and Kris E. Bevelander
Affiliations:
SRK: Amsterdam University Medical Centres
HvH: Tilburg University & Radboud University Medical Centre
NB: Tilburg University
KEB: Radboud University Medical Centre
Competing interests: None declared
Social media accounts of post authors:
SRK: @srkraaijeveld
NB: @Nadine_Bol