Workshop on Catastrophic AI Risk
In 2023, researchers and executives from almost all of the major AI companies warned the world that advanced Artificial Intelligence could pose societal-scale risks, potentially rising to the level of an existential threat. And yet, in 2024, every major AI company has the professed aim of achieving AGI (Artificial General Intelligence). What should we make of this dissonance? How should we respond to those who aim to build a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
This one-day colloquium from the Machine Intelligence and Normative Theory Lab at ANU will explore these questions from the perspective of empirically and technically grounded philosophy.
Program
What, if anything, should we do, now, about catastrophic AI risk? (10:30-12:00)
Seth Lazar (Philosophy, ANU)
Abstract: The recent acceleration in public understanding of AI capabilities has been matched by growing concern about AI's potentially catastrophic, even existential, risks. From presidents and industry leaders to policy wonks and 'thought leaders', the alarm about AI risk has been sounded, and the public is taking notice. Meanwhile, five (perhaps six) of the seven most valuable companies globally are pursuing Artificial General Intelligence, the very technology viewed by many as posing such risks. This cognitive dissonance raises questions: How should we think about catastrophic AI risk? And what, if anything, should we do? I will argue we must differentiate between risks posed by existing AI systems and their incremental improvements, and those contingent on a significant scientific breakthrough. We should prioritize the former not due to their greater stakes, but because understanding a technology is a prerequisite for mitigating its risks without incurring excessive costs. While we should not dismiss the risks posed by hypothetical future AI systems, our most prudent approach is to cultivate resilient institutions and adaptable research communities that can evolve as our knowledge expands.
What Remains of the Traditional AI X-Risk Argument? (1:00-2:30)
Cameron Domenico Kirk-Giannini (Philosophy, Rutgers)
Abstract: The most influential argument for the conclusion that advanced AI systems constitute a catastrophic risk to humanity is the Problem of Power-Seeking, developed by Nick Bostrom and Eliezer Yudkowsky, among others. But recent and forthcoming work by a number of philosophers challenges the foundational assumptions of this argument. In light of these challenges, I reassess the strength of the case for AI-related catastrophic risk. I conclude that there is still cause for serious concern.
Against the singularity hypothesis (3:00-4:30)
David Thorstad (Philosophy, Vanderbilt)
Abstract: The singularity hypothesis is a hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on undersupported growth assumptions. I show how leading philosophical defenses of the singularity hypothesis (Chalmers 2010, Bostrom 2014) fail to overcome the case for skepticism. I conclude by drawing out philosophical and policy implications of this discussion.