Beyond the robot apocalypse

By Nancy S. Jecker, Caesar A. Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky, and Anita Ho.

In Christopher Nolan’s film Oppenheimer, the protagonist frets that unleashing atomic energy will forever alter the world, making humankind’s annihilation possible. Some philosophers and many tech leaders worry that AI holds similar prospects: it imperils “humankind as a whole,” writes Nick Bostrom, and creates the risk of “an adverse outcome so bad [that] it would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

AI doomsayers frequently buttress their claims by appealing to the philosophies of effective altruism and longtermism. Effective altruism instructs us to do as much good as we can, while longtermism urges focusing on the well-being of future generations, given their sheer number.

The danger of this kind of catastrophizing is that it sells too well. As it gains momentum, the voices of doomsayers push aside non-doomsday threats happening now, making them appear trifling. AI hallucinations, displacement of human creative work, misinformation, and privacy hazards look inconsequential when stacked up against human annihilation. Why worry about algorithmic bias when we face a robot apocalypse?

Yet, even if non-existential threats fall short of completely obliterating us, they are hardly inconsequential. As we transition to more AI-centered societies, embedding fairness in the transition process demands directing our gaze to the here and now.

A just transition calls for focusing “not just on AI safety but on AI justice.” Efforts to improve fairness, accountability, and transparency in algorithms used to automate decisions foster a just transition. So do efforts to shift from purely extractive to more regenerative economies. While an extractive economy is focused on “extraction of data from our bodies, our minds, our purchases, our art, our statements, our actions, and our locations to fuel…outputs,” a regenerative economy is committed to engaging stakeholders, becoming accountable, and respecting people and the planet.

Some philosophers critical of effective altruism argue that its rock-bottom principles are utilitarian, stressing impartiality, well-being, and maximization. They charge that these values are fundamentally at odds with social justice movements that take seriously the standpoints of the oppressed. While the trillions of people living in the future count too, ethics is about more than doing the math.

A broader view of what is ‘effective’ and ‘altruistic’ is needed. It must be premised upon ensuring people living now can lead minimally good lives, addressing global asymmetries in wealth and power, and cultivating feminist insights about the ways AI oppresses people.

In an ironic plot twist, catastrophizing about the future steers investors toward philosophers working on AI’s existential threats. Thus, “many longtermists identify studying artificial intelligence as a priority: a hostile AI might end the species and wipe out generations yet unborn.”

Longtermism and effective altruism too often display “an uncritical attitude toward existing political and economic institutions.” For example, the effective altruism movement’s website champions the charity 80,000 Hours as a way to select a high-paying career that is maximally effective. Peter Singer praises Bill Gates and Warren Buffett as “the greatest Effective Altruists in human history.”

A better place to start an ethical critique of AI is interrogating the power consolidation of AI tech giants. A profit ethos can blind us to harms like algorithmic gender and racial bias, child sexual abuse material, and labor exploitation, intensifying threats to disadvantaged communities.

The remedy for these problems is not to direct our gaze to the distant future, but to partner with the people left behind and “elevate the voices, perspectives, and solutions of communities who directly experience the harms of AI.”


Paper title: AI and the Falling Sky: Interrogating X-Risk

Authors: Nancy S. Jecker, Caesar A. Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky, Anita Ho

Affiliations: Nancy S. Jecker: University of Washington School of Medicine; Caesar Alimsinya Atuire: University of Oxford; Jean-Christophe Bélisle-Pipon: Simon Fraser University; Vardit Ravitsky: The Hastings Center; Anita Ho: University of British Columbia and University of California San Francisco

Competing interests: None declared

Social media accounts of post authors (Twitter): Nancy S. Jecker: @profjecker; Caesar Alimsinya Atuire: @atuire; Jean-Christophe Bélisle-Pipon: @BelislePipon; Vardit Ravitsky: @VarditRavitsky; Anita Ho: @AnitaHoEthics
