AI and Power
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
In this essay Seth develops a model of algorithmically-mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
Prof. Lazar will lead efforts to address AI's impact on democracy during his 2024-2025 tenure as Senior AI Advisor at the Knight Institute.
The UK government is considering using Large Language Models to summarise and analyse submissions during public consultations. Writing for the Guardian, Seth weighs the considerations behind this proposal.
Seth was invited on the Generally Intelligent podcast to discuss issues of power, legitimacy, and the political philosophy of AI.
Ethics for AI Agents
Seth wrote an article in Aeon to explain the suite of ethical issues being raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Moral Skill
On 23 March 2024 Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A few more papers are still to come, but in the meantime you can find eight new papers on AI and normative theory here:
Seth Lazar and lead author Nick Schuster published a paper on algorithmic recommendation in Philosophical Studies.
On 18 September 2023 Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the University of Leeds.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
Sociotechnical AI Safety
With former acting White House Office of Science and Technology Policy director Alondra Nelson, Seth argued against a narrow technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI as a technical system within the broader sociotechnical systems of which it is part.
Andrew Smart and colleagues presented a tutorial session at FAccT 2024 that aimed to broaden the discourse around AI safety beyond alignment and existential risks, incorporating perspectives from systems safety engineering and sociotechnical labour studies while emphasising participatory approaches.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis, arguing that this popular concept relies on insufficiently supported growth assumptions. The paper explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
MINT Lab affiliate David Thorstad examines the limits of longtermism in a forthcoming paper in the Australasian Journal of Philosophy. The study introduces "swamping axiological strong longtermism" and identifies factors that may restrict its applicability.
A new paper by Andrew Smart and Atoosa Kasirzadeh in AI & Society titled "Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning" explores the importance of social context in explaining machine learning outputs.