Our Approach
Normative Philosophy of Computing
AI and related computational technologies are already embedded in almost every corner of our lives, and their harms are becoming viscerally apparent. These technologies demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question, and how the power they exercise by way of AI should be constrained.
If our answer to these questions is not simply the abolitionist call to stop building AI at all, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy which cannot simply be answered by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.
There is an urgent need for empirically- and technically-grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, then reach out here, and join our mailing list here.
Sociotechnical AI Safety
AI Safety now plays a prominent role in both industry and government attempts to understand and mitigate the risks of advanced AI systems. Researchers have highlighted its constitutive shortcomings: that it is methodologically, demographically, ideologically and evaluatively too narrow for the task it has been set. But while many AI labs and national AI Safety Institutes acknowledge these critiques, they need better theoretical and empirical resources to address them.
Alongside our work in normative philosophy of computing, the MINT Lab is working on bringing a sociotechnical lens to AI safety—leading by example through theoretical and empirical work, but also developing a broader conceptual and practical toolkit for the field.
This is not about identifying a subset of harms or concerns as sociotechnical. Instead, it is a lens one applies to the problems of AI safety in general: you cannot adequately anticipate threats, weigh their likelihood, or intervene to mitigate them (whether ex ante or post hoc) unless you start from the premise that AI systems are inherently sociotechnical, i.e. that they are constituted not just by software and hardware but also by people, groups, institutions, and structures.
In other words, even narrowly technical interventions on AI systems should also be sociotechnical, in this sense: the intervention should ideally be guided by a broader understanding of the system that it is intervening in.
Research Themes
New Writing
Our Work
News
Featured
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrow technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI systems in the context of the broader sociotechnical systems of which they are part.
In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
On September 30-October 1 MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this paper, Seth Lazar and Lorenzo Manuali argue that LLMs should not be used for formal democratic decision-making, but that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In this essay Seth develops a democratic egalitarian theory of communicative justice to guide the governance of the digital public sphere.
In this essay Seth develops a model of algorithmically-mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
In a forthcoming paper in Philosophy and Phenomenological Research, A.G. Holdier examines how certain types of silence can function as communicative acts that cause discursive harm, offering insights into the pragmatic topography of conversational silence in general.
In a new paper in Philosophical Studies MINT Lab affiliate David Thorstad critically examines the singularity hypothesis. Thorstad argues that this popular concept relies on insufficiently supported growth assumptions. The study explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
Events
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and in addition to identify and connect early career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.
The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University) recently brought together AI Safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on broadening inclusion in AI safety and on complicating the field's conceptual landscape, and participants identified promising directions for future research. A summary of the workshop can be found here, and a full report here.
Andrew Smart and colleagues presented a tutorial session at FAccT 2024 that aims to broaden the discourse around AI safety beyond alignment and existential risks, incorporating perspectives from systems safety engineering and sociotechnical labour studies while emphasising participatory approaches.