A new proposal for regulating AI in the EU disregards the risks of AI when it comes to people’s health, writes Hannah van Kolfschooten
Earlier this year, the European Commission published its long awaited legislative proposal on artificial intelligence (AI): the Artificial Intelligence Act. With this proposal, the Commission has taken the first steps towards uniform rules on AI in the EU. The act aims to take a balanced approach to regulating AI, one that ensures effective protection of fundamental rights without hindering AI’s socioeconomic benefits. The proposal fails, however, to address the health specific challenges that AI presents.
AI technology, and particularly its machine learning techniques, can be deployed to predict the best course of action in a specific context because of its capacity to recognise patterns in large datasets. AI has been heralded as holding the promise to save lives by improving the quality of healthcare, reducing its costs, increasing its accessibility, and anticipating health emergencies. Yet, given the opaque “black box” nature of many AI systems, AI may also threaten fundamental rights, such as the rights to non-discrimination, privacy, and access to justice.
When AI is deployed in the context of health, patients are exposed to specific risks that could lead to physical or psychological harm: racial bias in algorithms, for instance, can lead to incorrect diagnoses. The lack of transparency around how algorithms work also makes it difficult to give patients the information they need to exercise their rights, such as the right to informed consent. Moreover, AI’s dependence on large amounts of personal data poses risks to medical data protection: patients have limited control over how their data are used, and AI systems are vulnerable to cyber security breaches. All of this means that care should be taken when AI is applied in clinical or health settings, yet the proposal falls short of exercising this caution.
The EU’s AI proposal takes a risk based approach to the regulation of AI: the higher the risk, the stricter the rules. Most of the requirements laid down in the act focus on “high risk” applications and include rules on transparency, accuracy, and data governance. The proposal labels AI systems used in specific areas, such as critical infrastructure, education, and law enforcement, as “high risk.” While the proposal stipulates that all devices falling under the Medical Devices Regulation (MDR) qualify as “high risk,” healthcare itself is conspicuously absent from the list of high risk areas.
This is remarkable: healthcare is one of the most popular sectors for AI deployment in the EU, and it is inherently high risk because it deals with the human body and with matters of life and death. The commission seems to have assumed that all AI applications used in the context of health are covered by the MDR. This assumption is false: the MDR covers only medical devices and software with an intended medical purpose, such as the treatment of patients. It therefore excludes many AI applications used in the realm of health, such as fitness and health apps (for example, apps to track medication) and administrative AI systems used by doctors in hospitals and other healthcare settings. These applications may still present new challenges and risks to people because of their direct or indirect effects on the human body or their use of sensitive health data. Mobile pregnancy apps, for example, offer AI powered recommendations that are likely to influence users’ reproductive health and process sensitive data on people’s health and life choices, yet they would not fall under the MDR and so would not be considered “high risk” under the proposed Artificial Intelligence Act.
This omission stems primarily from the lack of a human-centric approach: the proposal centres on companies rather than people. The proposed act ignores the perspective of “end users” and others affected by AI powered decisions. It mainly sets rules for developers and allows companies to self-assess their conformity with the regulation, yet it does not give “end users” the means to guard themselves against the detrimental effects of AI. This regulatory approach disregards the vulnerability of the people exposed to AI algorithms. That is especially harmful in the health and clinical context, where people are particularly susceptible to the risks of AI because of the inherent dependency and information asymmetries in the patient-doctor relationship. By contrast, the EU’s General Data Protection Regulation does empower citizens to control how their personal information is used by granting them extensive rights.
It is true that the EU has limited legal powers to regulate healthcare, but this does not absolve it of its responsibility to protect people’s fundamental rights when it comes to their health. To adequately protect people’s rights in the context of health-related AI, the EU must give those affected by AI systems effective and enforceable rights. In addition, health and healthcare must be added to the list of “high risk” areas. Only then can Europe fully reap the benefits of AI in health and medical science.
Hannah van Kolfschooten researches patients’ rights protection in the regulation of AI at the Law Centre for Health and Life, University of Amsterdam.
Competing interests: none declared.