The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT lab at the Australian National University) recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the field's conceptual landscape. Participants also identified promising future research directions. A summary of the workshop can be found here, and a full report here.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates for building capacities to navigate future uncertainty in AI development.
Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy’s prospects in a world with AGI, in Tech Policy Press.
The headline wasn’t representative, but this was a fun piece to write on the excellent White House OMB memo about AI use within government. Read the whole thing (not the misleading headline) here: https://www.theguardian.com/commentisfree/2023/nov/28/united-states-artificial-intelligence-eu-ai-washington#comments
One of the biggest tech policy debates today is about the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts like the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth teamed up with Tino Cuéllar, president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists as well as policy-makers and AI researchers.
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth Lazar joins the GETTING-Plurality group at Harvard University.
Seth teams up with AI scientists to reply to the statement on the existential risk posed by AI to humanity, suggesting that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Seth Lazar was invited to contribute to the ACOLA Rapid Research report on Large Language Models.