Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a cross-disciplinary computer science conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
The UK government is considering the use of Large Language Models to summarise and analyse submissions during public consultations. Seth weighs in for the Guardian on the considerations behind this proposal.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy's prospects in a world with AGI, in Tech Policy Press.
The headline wasn't representative, but this was a fun piece to write about the excellent White House OMB memo on AI use within government. Read the whole thing (not the misleading headline) here: https://www.theguardian.com/commentisfree/2023/nov/28/united-states-artificial-intelligence-eu-ai-washington#comments
One of the biggest tech policy debates today is about the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts, including the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.
Seth joined many other AI policy experts in contributing to the Singapore Conference on AI, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth teamed up with Tino Cuéllar, the president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists, as well as policy-makers and AI researchers.
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth Lazar joins the GETTING-Plurality group at Harvard University.
Seth teams up with AI scientists to reply to the statement on the existential risk AI poses to humanity, arguing that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Seth Lazar was invited to contribute to the ACOLA Rapid Research report on Large Language Models.
Seth Lazar was a co-author on a report by a study committee of the Computer Science and Telecommunications Board of the US National Academies of Sciences, Engineering, and Medicine, Fostering Computing Research: Foundations and Practices. The report was commissioned by the NSF and is to be presented to the US Congress.