Can artificial intelligence serve as an ethical decision-maker within committees?

By Kannan Sridharan & Gowri Sivaramakrishnan.

Artificial intelligence has been widely used in the healthcare industry in recent years. These systems learn to perform tasks commonly associated with human cognitive functions, such as identifying patterns. Typically, they process massive amounts of data and look for patterns to model in their own decision-making.

A number of research studies already suggest that AI can perform as well as, or better than, humans at key healthcare tasks such as disease diagnosis. The majority of these studies claim that AI significantly reduces workforce requirements and is therefore of considerable relevance in resource-limited settings. One such setting is the Institutional Review Board (IRB). IRBs have been criticized for delays in approving research proposals due to inadequate or inexperienced staff.

Artificial intelligence (AI), particularly large language models (LLMs), may have significant potential to assist IRB members in a prompt and efficient review process. An LLM is a deep learning model that can perform a variety of natural language processing tasks, such as recognizing, translating, predicting, or generating text and other content. LLMs can be trained to solve text classification, question answering, document summarization, and text generation problems, and this problem-solving capacity can be applied to reviewing proposals submitted to an IRB. They provide information in a clear, conversational style that is easy for users to understand. Some popular LLMs include ChatGPT and Google Bard.
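As a rough illustration only (the protocol excerpt, prompt wording, and model name below are hypothetical and are not those used in our study), a reviewer could query such a model programmatically, for example with the OpenAI Python client:

```python
# Hypothetical sketch: asking an LLM to flag ethical concerns in a protocol excerpt.
# The prompt wording, protocol text, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

protocol_excerpt = (
    "Study design: randomized, placebo-controlled trial of Drug X in adults "
    "with moderate hypertension. Consent will be obtained verbally."
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model could be substituted
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting an Institutional Review Board. "
                "Identify potential ethical concerns in the protocol text provided."
            ),
        },
        {"role": "user", "content": protocol_excerpt},
    ],
)

print(response.choices[0].message.content)
```

In practice, the choice of model, the framing of the system prompt, and how much of the protocol is supplied would all shape the quality of the output.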

The results of our study, published in the recent issue, showed that LLMs were able to identify errors in certain key elements of a research protocol. Prompting technique plays a key role in maximizing the efficiency of an LLM. One technique that exemplifies how LLMs can be made more powerful is the "Chain of Thought" method, which involves breaking a complex task down into smaller chunks and using LLM prompts to reason through each step. Another approach is to chain multiple hyper-specific prompts together to achieve better results. Our study also identified that multiple prompting led to better outputs in domains such as assessing the suitability of a placebo arm, risk mitigation strategies, and potential risks to study participants.
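A minimal sketch of what such prompt chaining might look like in code, assuming the same hypothetical client as above; the questions are illustrative and are not the exact prompts used in our study:

```python
# Hypothetical sketch of chained, hyper-specific prompts: each question is asked in turn
# and earlier answers stay in the conversation so the model can reason step by step.
from openai import OpenAI

client = OpenAI()

protocol_excerpt = "..."  # the full protocol text would be supplied here

questions = [
    "Is a placebo arm ethically justifiable for this condition? Reason step by step.",
    "What risk mitigation strategies does the protocol describe, and are they adequate?",
    "List any potential risks to study participants that the protocol does not address.",
]

messages = [
    {
        "role": "system",
        "content": "You are assisting an IRB review. Base every answer only on the protocol provided.",
    },
    {"role": "user", "content": f"Protocol:\n{protocol_excerpt}"},
]

for question in questions:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # carry reasoning forward
    print(f"Q: {question}\nA: {answer}\n")
```

Keeping earlier answers in the conversation lets each hyper-specific question build on the reasoning from the previous step, which is the essence of the chained approach described above.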

It is likely that LLMs can enhance the identification of potential ethical issues in clinical research, and they could be used as an adjunct tool to pre-screen research proposals and improve the efficiency of an IRB.


Paper title: Leveraging Artificial Intelligence to Detect Ethical Concerns in Medical Research: A Case Study

Authors: Kannan Sridharan & Gowri Sivaramakrishnan.

Affiliations: Arabian Gulf University & Ministry of Health, Kingdom of Bahrain.

Competing interests: None declared
