Policy
With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
In this essay Seth develops a model of algorithmically mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
Media
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
Michael Bennett has won the best student paper award at AGI 2023 for the second year running, for his paper "Emergent Causality and the Foundation of Consciousness." Read the full paper here.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly, and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates focusing on building the capacity to navigate future uncertainty in AI development.
Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
Events
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.
The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University) recently brought together AI safety researchers and researchers focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the conceptual landscape, and participants identified promising directions for future research in the field. A summary of the workshop can be found here, and a full report here.
Andrew Smart and colleagues presented a tutorial session at FAccT 2024 that aimed to broaden the discourse around AI safety beyond alignment and existential risks, incorporating perspectives from systems safety engineering and sociotechnical labour studies while emphasising participatory approaches.
Michael Barnes presented at the Second Annual Penn-Georgetown Digital Ethics Workshop. The presentation (co-authored with Megan Hyska, Northwestern University) was titled “Interrogating Collective Authenticity as a Norm for Online Speech,” and it offers a critique of (relatively) new forms of content moderation on major social media platforms.
On 23 March 2024 Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
Resources
Seth wrote an article in Aeon to explain the suite of ethical issues being raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents from AI companions, to Attention Guardians, to universal intermediaries.
MINT Lab’s Seth Lazar and PhD student Jake Stone have published a new paper in Noûs on the site of predictive justice.
Seth contributed to the Singapore Conference on AI alongside many other AI policy experts, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Seth features on a new episode of the Philosophy Bites podcast, discussing the potential risks of AI.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.