Will AI Hasten a Dystopian Healthcare Landscape?

Reflection by Thanemozhi G. Natarajan, PhD, and Natarajan Ganesan, PhD

In the movie Idiocracy (2006), a cryogenically frozen man named Joe wakes up in a distant future where the world has become a dystopian wasteland. One of the most striking scenes in the movie occurs when Joe goes to the doctor for a checkup. The doctor and the staff are completely clueless, subjecting Joe to ridiculous questions, such as why he doesn’t have a “tattooed” bar code. And they hand out probes to be inserted in all the wrong places.

Though the scenes were hilarious and satirical, a way of imagining a distant future, Joe’s reality looks like it is knocking at our door with the recent release of AI-powered tools for public consumption. Not surprisingly, many top technology CEOs and leaders have started voicing their concerns, and others in the sector have joined a growing conversation. How do these conversations affect those of us in healthcare fields, especially now that AI has started making heavy inroads into healthcare?[1]


Losing Human Empathy and Increasing Healthcare Disparities

Fig. 1: Hamlet’s paradox in a Schrodinger’s world: Will there be a human touch? (Bing image creator)

Increasing use of AI in healthcare has raised concerns about the potential loss of human empathy and compassion in patient care.[2] Industry stakeholders fear that patients may be reduced to mere data points when AI algorithms are used to make treatment decisions (fig. 1). If healthcare becomes solely algorithm-based, patients may be treated more like machines, with little regard for their emotional and psychological needs or for their unique circumstances.

Additionally, AI in healthcare could exacerbate existing disparities in healthcare access and outcomes. AI algorithms rely on data sets to make decisions, and if those data sets are biased, the resulting decisions will be biased as well. For example, if AI algorithms are trained on data that do not represent the broadest possible population, they may produce inaccurate diagnoses and treatments for underrepresented groups, perpetuating existing disparities and ultimately worsening health outcomes for those patients. It is therefore essential that the use of AI in healthcare does not deepen disparities in access and outcomes.
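To make the point concrete, here is a toy sketch (pure standard library, with entirely hypothetical numbers) of how a training set dominated by one subgroup can produce a model that works well for that group and poorly for everyone else:

```python
import random

random.seed(0)

# Two hypothetical subgroups whose "ill" biomarker ranges differ:
# group A is ill above 5, group B only above 8.
def sample(group, n):
    data = []
    for _ in range(n):
        x = random.uniform(0, 12)
        ill = x > (5 if group == "A" else 8)
        data.append((x, ill))
    return data

# The training set is 95% group A -- unrepresentative of group B.
train = sample("A", 950) + sample("B", 50)

def accuracy(threshold, data):
    # Classify "ill" whenever the biomarker exceeds the threshold.
    return sum((x > threshold) == ill for x, ill in data) / len(data)

# "Train" the simplest possible model: pick the cutoff that maximizes
# accuracy on the (skewed) training set.
best = max((t / 10 for t in range(121)), key=lambda t: accuracy(t, train))

acc_a = accuracy(best, sample("A", 1000))
acc_b = accuracy(best, sample("B", 1000))
print(f"cutoff={best:.1f}  accuracy on group A={acc_a:.2f}  on group B={acc_b:.2f}")
```

The cutoff that best fits the majority group generalizes badly to the underrepresented one; the same dynamic, at scale, is what representative data collection is meant to prevent.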


Balancing AI’s Strengths with a Human Touch

As AI technology continues to expand rapidly, the potential benefits of AI in healthcare look promising, though they come with much uncertainty. AI algorithms can analyze large amounts of patient data to identify patterns and correlations that may not be visible to human doctors. With these insights, healthcare providers can tailor treatment to the individual patient and improve outcomes in a personalized manner.

AI can be particularly useful for crunching field data in public health. AI-powered tools can analyze large volumes of data from social media and search engines to identify trends in public health concerns, such as outbreaks of infectious diseases or misinformation about vaccines, helping public health officials respond more quickly and effectively to emerging threats.
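As a rough illustration of what such trend detection might look like under the hood, the sketch below (hypothetical counts, not a real pipeline) flags days on which mentions of a symptom keyword jump well above their recent baseline:

```python
# A toy spike detector over daily keyword-mention counts
# (hypothetical data; a real system would pull from platform APIs).
def flag_spikes(daily_counts, window=7, factor=3.0):
    """Flag days whose count exceeds `factor` times the trailing-window mean."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if daily_counts[i] > factor * baseline:
            spikes.append(i)
    return spikes

# Mentions of a symptom keyword over three weeks; a surge begins on day 16.
mentions = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12,
            10, 11, 13, 9, 10, 58, 74, 91, 88, 95]
print(flag_spikes(mentions))  # → [16, 17, 18]
```

Note that the trailing baseline itself absorbs the surge after a few days, so sustained high counts stop being flagged; real surveillance systems pair such simple triggers with human review.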

Despite these potential benefits, it is important to recognize that a personal touch goes above and beyond data analysis. Clichés such as “focus on patient needs” apply only if healthcare professionals retain their human capacity for empathy. And defining empathy is not enough: empathy must be freely given, not simulated by an AI processing a set of instructions. Only humans can drive AI-powered tools while respecting patients’ privacy and autonomy. By doing so, we can ensure that AI enhances, rather than replaces, the human touch that is so critical to effective healthcare.


Recommendations for a Complementary Approach to Human/AI Relations

For AI in healthcare to benefit patients and society as a whole, industry stakeholders need to take several steps:

  1. AI algorithms must be developed and trained on unbiased data, collected from diverse populations, so that their outputs are accurate and equitable.
  2. Healthcare providers must be transparent about the use of AI. Patients should be informed when AI is being used to make decisions about their care, and they should have access to the data behind those decisions. Transparency ensures that patients can make informed choices about their care and have a say in how providers use their data.
  3. AI should support and augment human decision-making, not replace it. Healthcare providers are the first line of defense in upholding this principle. Healthcare must remain personal and compassionate, so that patients receive care tailored to their individual needs.

AI has the potential to revolutionize healthcare in many positive ways. Yet we must ensure that this powerful technology benefits patients and society as a whole by developing unbiased algorithms, establishing transparency in the use of AI, and using AI responsibly.



[1] Pranav Rajpurkar, Emma Chen, Oishi Banerjee, and Eric J. Topol, “AI in Health and Medicine,” Nature Medicine 28, no. 1 (2022): 31–38, https://doi.org/10.1038/s41591-021-01614-0.

[2] Sandeep Reddy, Sonia Allan, Simon Coghlan, and Paul Cooper, “A Governance Model for the Application of AI in Health Care,” Journal of the American Medical Informatics Association 27, no. 3 (March 2020): 491–497, https://doi.org/10.1093/jamia/ocz192.


About the Authors

Dr. Natarajan Ganesan leads Queromatics, a research-based non-profit focused on precision healthcare. Dr. Thanemozhi G. Natarajan is a leading expert in cancer genomics and hereditary genetics.
