
Policy
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.

Media
The UK government is considering the use of Large Language Models to summarise and analyse submissions during public consultations. Seth weighs in on the considerations behind such a suggestion for the Guardian.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a Senior AI Advisor at the Knight First Amendment Institute.
Michael Bennett has won the best student paper award at AGI 2023 for the second year running, this time for his paper "Emergent Causality and the Foundation of Consciousness." Read the full paper here.
With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems within the broader sociotechnical systems of which they are part.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates for building the capacities needed to navigate future uncertainty in AI development.

Events
The AIH Lab and Hong Kong Ethics Lab co-hosted "The Philosophy of AI: Themes from Seth Lazar" workshop at HKU on January 17.
On December 14 Seth Lazar gave a keynote talk to the NeurIPS workshop on Pluralistic Alignment.
Seth has been invited to give the first Annual Arthur & Barbara Gianelli Lecture on The Philosophy of Science at St John’s University in April 2025.
On December 14 Seth Lazar delivered a keynote talk on evaluating the ethical competence of LLMs to the NeurIPS Algorithmic Fairness through the Lens of Metrics and Evaluation workshop.
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
On December 9 Seth gave a talk entitled 'Evaluating LLM Ethical Competence' at the HKU workshop on Linguistic and Cognitive Capacities of LLMs.
From December 1, 2024, to February 7, 2025, Seth will be undertaking a visiting fellowship at the University of Hong Kong.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches focused on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.

Resources
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs on the site of predictive justice.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing Question 6: "How do we elicit the values and norms to which we wish to align AI systems, and implement them?"
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth features on a new episode of the Philosophy Bites podcast about the potential risks of AI.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
