Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs with human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs, “On the Site of Predictive Justice”.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing Question 6: “How do we elicit the values and norms to which we wish to align AI systems, and implement them?”
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth features on a new Philosophy Bites podcast episode about the potential risks of AI.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
Seth features on a new Hi-Phi Nation podcast episode about effective altruism.
Seth teams up with AI scientists to reply to the statement on the existential risk that AI poses to humanity, arguing that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Seth Lazar has been invited to contribute to the ACOLA Rapid Research report on Large Language Models.
Seth features on a new Science Vs podcast episode about the risks of AI.
Seth Lazar has been invited to give the Tanner Lectures on AI and Human Values at Stanford in January 2023. The lectures will be hosted by the Stanford Institute for Human-Centered AI and the McCoy Family Center for Ethics in Society.
Summary of recent work by Seth and other Minties on the moral and political philosophy of data and AI.