How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
Together with Aaron Snoswell, Dylan Hadfield-Menell, and Daniel Kilov, Seth Lazar has been awarded USD50,000 to support work on developing a “moral conscience” for AI agents. The grant will start in April 2024 and run for 9-10 months.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A few more papers are still to come, but in the meantime you can find eight new papers on AI and normative theory in the issue.
MINT, together with ADM+S, held a stellar workshop on normative philosophy of computing at the Kioloa Coastal Campus, with papers from Jeff Howard, Rachel Sterken and Eliot Michaelson, Jenny Judge, Sina Fazelpour, Sarita Rosenstock, Luise Mueller, Megan Hyska and Raphael Milliere.
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
Seth features on a new podcast episode about the potential risks of AI with Philosophy Bites.
Seth Lazar and the MINT lab were awarded USD50,000 to support normative philosophy of computing field-building. The funds will be used to support research workshops in Australia and overseas, such as this year’s normative philosophy of computing workshop at Kioloa Coastal Campus.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
Seth features on a new podcast episode about effective altruism by Hi-Phi Nation.
Seth features on a new podcast episode about the risks of AI by Science Vs.
Josh Glancy of the Sunday Times has written an in-depth profile of Oxford's Institute for Ethics in AI, in which he discusses the ethical issues raised by AI with members of the institute, which is led by John Tasioulas. The piece includes a brief quote from Seth.
On January 26th and 27th, Professor Seth Lazar gave the Tanner Lectures on AI and Human Values at Stanford University. The event was co-hosted by Stanford’s Institute for Human-Centered AI and the McCoy Family Center for Ethics in Society.