Policy
Seth Lazar and former White House policy advisor Alex Pascal assess democracy’s prospects in a world with AGI, in Tech Policy Press
The headline wasn’t representative, but this was a fun piece to write about the excellent White House OMB memo about AI use within government. Read the whole thing (not the misleading headline) here: https://www.theguardian.com/commentisfree/2023/nov/28/united-states-artificial-intelligence-eu-ai-washington#comments
One of the biggest tech policy debates today is about the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts like the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.
Seth contributed to the Singapore Conference on AI with many other AI policy experts, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth teamed up with Tino Cuéllar, president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists, as well as policy-makers and AI researchers.
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Media
Seth joined Arvind Narayanan and Sayash Kapoor in investigating the limitations of techniques for model alignment. Read more here: https://www.aisnakeoil.com/p/model-alignment-protects-against
Seth features on a new Philosophy Bites podcast episode about the potential risks of AI.
With former acting White House Office of Science and Technology Policy director Alondra Nelson, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
Events
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
MINT is teaming up with the HUMANE.AI EU project, represented by PhD student Jonne Maas, to support a workshop on political philosophy and AI, to take place at Kioloa Coastal Campus in June 2024.
MINT is teaming up with colleagues in the US to edit a special section of the Journal of Responsible Computing on Barocas, Hardt and Narayanan’s book on Fair Machine Learning: Limitations and Opportunities.
MINT, together with ADM+S, held a stellar workshop on normative philosophy of computing at the Kioloa Coastal Campus, with papers from Jeff Howard, Rachel Sterken and Eliot Michaelson, Jenny Judge, Sina Fazelpour, Sarita Rosenstock, Luise Mueller, Megan Hyska and Raphael Milliere.
Seth presented work at the Plurality Institute Summit, hosted by Danielle Allen at Harvard.
Resources
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs on the site of predictive justice.