AI has yet to fulfil its potential, but it could if we learn lessons from covid-19
Shortly after the covid-19 pandemic was declared, the WHO emphasized the potentially prominent role that artificial intelligence (AI) could play in the public health response. Since the pandemic’s onset, innovative applications of AI have included detecting outbreaks, facilitating diagnosis, identifying people with fevers, and accelerating gene sequencing, demonstrating that this non-medical intervention holds much promise both for the response to the global health crisis and for future healthcare. However, despite its potential, AI has not achieved impact on a global scale. It is an opportune time to consider the constraints and gaps, and to draw lessons that can inform a sustainable, resilient, and citizen-centric future for AI.
Many of the challenges AI faces in improving health existed before the pandemic. Some longstanding issues have become more pressing in a situation where decisions must be made quickly, while new ones have emerged: a lack of epidemiological expertise in the design and interpretation of AI findings, limited data availability in crisis contexts, and an absence of mechanisms for risk mitigation. What we have learned is that new norms need to be established if the technology is to make a substantial impact on health during the pandemic and beyond.
Firstly, human intelligence and expertise need to be at the centre of all AI applications. This understanding is fundamental to dispelling the hype surrounding AI and is the first step in addressing the quality and accountability of AI algorithms. For example, the earliest detection of the outbreak using online data ignited interest in AI for prediction and tracking. Many companies subsequently adopted a “big data” approach built on online data. However, they were unable to replicate the early successes as the amount of information about covid-19 online increased, drowning out signals in the data. Crucially, epidemiological expertise was often missing from the design and interpretation of AI findings. The role of human expertise will become even more important if health data, which has so far been inaccessible to many companies, is increasingly used for prediction and tracking.
Secondly, new norms around data availability in crisis contexts need to be established. In the field of public health research and development, solidarity in data sharing was recognized early in the pandemic. A number of effective data-sharing mechanisms were initiated, including the WHO’s Global Research on Coronavirus Disease Database, the European COVID-19 Data Platform by the European Commission, and the Cochrane COVID-19 Study Register. These scalable approaches to data sharing have provided good support for AI innovation during the pandemic, although some aspects of data management capacity still need to be developed, such as eliminating inequalities in data accessibility between countries.
The use of AI during the pandemic has also reignited a longstanding debate about the tension between surveillance and privacy around data that has not been collected as part of research. While AI-assisted contact tracing and disease monitoring have helped the pandemic response in a few countries, such as China, the Republic of Korea, and Singapore, these approaches have met with scepticism in many others. Legislation and regulation that balance community wellbeing and individual privacy need to be enacted to ensure that these AI technologies, which have not yet been successfully rolled out, can play their role in the next phases of the pandemic. Effective contact tracing and disease surveillance would be helpful components in returning societies to a new normal, in which decisions about surveillance and disease control measures need to be made dynamically, with the least disruption to society.
Thirdly, new norms concerning broad-based benefit and risk mitigation need to be established. The concepts of beneficence, non-maleficence, and fairness in AI drew much attention before the pandemic, but mostly as an afterthought. Going forward, AI innovations for the covid-19 response must include “ethics by design.” Integrating ethical considerations into the development of AI solutions will ensure broad-based benefits while minimising harm. This can be achieved by developing clearer guidelines and standards in tandem with robust prequalification testing and verification systems. These were not in place before the pandemic, but efforts to bridge the gap have started. For example, in the middle of the pandemic, the US Food and Drug Administration (FDA) issued its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, and the UK presidency of the G7 has made recognized standards for AI a priority in its health workstream. Lastly, it should not be forgotten that AI itself can be a powerful equalizer for healthcare, including in crisis settings.
The pandemic has accelerated AI innovation at a remarkable speed, but serious questions need to be resolved to unblock bottlenecks in its real-world application and wider roll-out. The lessons learnt from the pandemic have helped the field dispel some of the hubris surrounding AI. However, we need to establish new norms and a long-term vision to realise AI’s potential.
Bernardo Mariano Junior, director, Department of Digital Health and Innovation, WHO
Sheng Wu, technical officer, Department of Digital Health and Innovation, WHO
Derrick Muneene, unit head, Department of Digital Health and Innovation, WHO
Competing interests: none declared.
Disclaimer: The authors are staff members of the World Health Organization. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the World Health Organization.
This article is part of our Artificial Intelligence and covid-19 collection.