Seth was invited to deliver the Scholl Lecture at Purdue University on 3 April. His presentation focused on how we should respond now to the catastrophic risks posed by AI systems, risks that often dominate contemporary discourse in the normative philosophy of computing.
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
Together with Aaron Snoswell, Dylan Hadfield-Menell, and Daniel Kilov, Seth Lazar has been awarded USD50,000 to support work on developing a “moral conscience” for AI agents. The grant will start in April 2024 and run for 9-10 months.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy’s prospects in a world with AGI, in Tech Policy Press.
One of the biggest tech policy debates today is about the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts like the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing Question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth joined Arvind Narayanan and Sayash Kapoor in investigating the limitations of techniques for model alignment. Read more here: https://www.aisnakeoil.com/p/model-alignment-protects-against
Seth teamed up with Tino Cuéllar, the president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists, as well as policy-makers and AI researchers.
Recent progress in LLMs has caused an upsurge in public attention to the field of AI safety, and growing research interest in the technical methods that can be used to align LLMs to human values. At this pivotal time, it is crucial to ensure that AI safety is not restricted to a narrowly technical approach, and instead also incorporates a more critical, sociotechnical agenda that considers the broader societal systems of which AI is always a part. This workshop brought together some of the leading practitioners of this approach to crystallize it and support further integration into both research and practice.
Nick Schuster represented MINT in organising a successful Dagstuhl seminar on responsible robotics.
Seth teamed up with AI scientists to respond to the statement on the existential risk that AI poses to humanity, arguing that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Seth Lazar was invited to contribute to the ACOLA Rapid Research report on Large Language Models.