With former acting White House Office of Science and Technology Policy director Alondra Nelson, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and the systemic risks of an emerging AI agent economy.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI advisor at the Knight Institute.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly, and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates building capacities to navigate future uncertainty in AI development.
Seth presented a tutorial on the rise of language model agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in sociotechnical systems.
The UK government is considering using large language models to summarise and analyse submissions during public consultations. Seth weighs in on the considerations behind such a proposal for The Guardian.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy's prospects in a world with AGI, in Tech Policy Press.
The headline wasn't representative, but this was a fun piece to write about the excellent White House OMB memo on AI use within government. Read the whole thing (not the misleading headline) here: https://www.theguardian.com/commentisfree/2023/nov/28/united-states-artificial-intelligence-eu-ai-washington#comments
Seth joined Arvind Narayanan and Sayash Kapoor in investigating the limitations of techniques for model alignment. Read more here: https://www.aisnakeoil.com/p/model-alignment-protects-against
Seth features on a new episode of the Philosophy Bites podcast about the potential risks of AI.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.