Normative Philosophy of Computing
‘AI’ is understood one way by computer scientists, another by industry, yet another by governments and the general public. It describes a broad suite of methods for imbuing computational systems with the ability to perceive their environment and act so as to realise some particular goals. AI’s impact is greatest when it learns dynamically and is able to adapt to new scenarios which exceed what its programmers anticipated. But many of the same moral and political problems are raised by much simpler, more deterministic, rules-based systems.
Better instead to focus on particular computational systems, and explore their impacts on society. Algorithmic intermediaries, for example, use some of the most advanced techniques in Machine Learning to mediate our social relations, gradually supplanting their physical counterparts with the algorithmic public square, algorithmic markets, and algorithmic sociality. Predictive analytics, often (but not always) supercharged by ML, are used by governments to try to better target resources, in the (often disappointed) hopes of reducing costs, and perhaps also reducing bias. Autonomous robotic systems—from self-driving cars to care robots and lethal autonomous weapons—integrate software and hardware in the pursuit of capabilities that meet or exceed human performance in each of those domains. Software and hardware also unite in new systems for control, including bossware, surveillance tech, and algorithmic bailiffs. And now Large Language Models are enabling an unending array of new applications, from chatbots to universal AI assistants, to adaptable, autonomous AI agents.
AI and related computational technologies are already embedded in almost every corner of our lives. Their harms are already becoming viscerally apparent. They demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question—and how the power that they exercise by way of AI should be constrained.
If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy which cannot simply be answered by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.
There is an urgent need for empirically- and technically-grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, then reach out here, and join our mailing list here.
News
Featured
In a forthcoming paper in Philosophy and Phenomenological Research, A.G. Holdier examines how certain types of silence can function as communicative acts that cause discursive harm, offering insights into the pragmatic topography of conversational silence in general.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis. Thorstad argues that this popular concept relies on insufficiently supported growth assumptions. The study explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
MINT Lab affiliate David Thorstad examines the limits of longtermism in a forthcoming paper in the Australasian Journal of Philosophy. The study introduces "swamping axiological strong longtermism" and identifies factors that may restrict its applicability.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on those questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.
In her upcoming paper in Philosophers’ Imprint, Claire Benn examines the ethical implications of deepfake pornography. Benn analyses the issue through the lens of consent, arguing that using anyone's likeness in such content requires permission, even when AI-generated. Read the paper here.
Michael Bennett has won the best student paper award at AGI 2023, for the second year running, for his paper "Emergent Causality and the Foundation of Consciousness." Read the full paper here.
A new paper by Andrew Smart and Atoosa Kasirzadeh in AI & Society titled "Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning" explores the importance of social context in explaining machine learning outputs.
MINT alum Pamela Robinson has published a new paper in Philosophical Studies titled "The AI-design regress", addressing challenges in designing AI systems for moral decision-making. The full paper can be accessed here.
The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT lab at the Australian National University) recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the conceptual landscape. Participants also identified promising future research directions in the field. A summary of the workshop can be found here, and a full report here.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly, and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates building capacities to navigate future uncertainty in AI development.
The Machine Intelligence and Normative Theory (MINT) Lab has been awarded a US$480,000 grant from the Survival and Flourishing DAF (Donor Advised Fund). This gift will support research by the MINT lab into sociotechnical AI safety—the integration of multidisciplinary perspectives with technical research on mitigating direct risks caused by AI systems operating without immediate human supervision.
As AI continues to permeate every aspect of our society, the evolution of adversarial machine learning from a niche academic field to a critical component of global cybersecurity underscores the significance of research in this area.
Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
The UK government is considering the use of Large Language Models to summarise and analyse submissions during public consultations. Seth weighs in on the considerations behind such a suggestion for the Guardian.
Seth was invited on The Gradient podcast to discuss the risks, challenges, and benefits of developing publicly-minded AI, as well as the philosophical challenges those questions pose.
Seth was invited on the Generally Intelligent podcast to discuss issues of power, legitimacy, and the political philosophy of AI.
Seth has completed a book chapter forthcoming with MIT Press. The book is Collaborative Intelligence: How Humans and AI are Transforming our World, edited by Arathi Sethumadhavan and Mira Lane.
Seth was invited to deliver the Scholl Lecture at Purdue University on 3 April. Seth’s presentation focused on how we should currently respond to the kinds of catastrophic risks posed by AI systems that often dominate contemporary discourse in the normative philosophy of computing.
Michael Barnes presented at the Second Annual Penn-Georgetown Digital Ethics Workshop. The presentation (co-authored with Megan Hyska, Northwestern University) was titled “Interrogating Collective Authenticity as a Norm for Online Speech,” and it offers a critique of (relatively) new forms of content moderation on major social media platforms.
On 23 March 2024 Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
MINT is teaming up with the HUMANE.AI EU project, represented by PhD student Jonne Maas, to support a workshop on political philosophy and AI, to take place at Kioloa Coastal Campus in June 2024.
MINT is teaming up with colleagues in the US to edit a special section of the ACM Journal on Responsible Computing on Barocas, Hardt, and Narayanan’s book Fairness and Machine Learning: Limitations and Opportunities.
Together with Aaron Snoswell, Dylan Hadfield-Menell, and Daniel Kilov, Seth Lazar has been awarded USD 50,000 to support work on developing a “moral conscience” for AI agents. The grant will start in April 2024, and run for 9-10 months.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A couple more papers remain to come, but in the meantime you can find eight new papers on AI and normative theory here.
Seth Lazar and lead author Nick Schuster published a paper on algorithmic recommendation in Philosophical Studies.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy’s prospects in a world with AGI, in Tech Policy Press.
Events
Seth joined many other AI policy experts in contributing to the Singapore Conference on AI, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth teamed up with Tino Cuéllar, the president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists, as well as policy-makers and AI researchers.
Recent progress in LLMs has caused an upsurge in public attention to the field of AI safety, and growing research interest in the technical methods that can be used to align LLMs to human values. At this pivotal time, it is crucial to ensure that AI safety is not restricted to a narrowly technical approach, and instead also incorporates a more critical, sociotechnical agenda that considers the broader societal systems of which AI is always a part. This workshop brought together some of the leading practitioners of this approach to crystallize it and support further integration into both research and practice.
Nick Schuster represented MINT in organising a successful Dagstuhl seminar on responsible robotics.
On January 26th and 27th, Professor Seth Lazar gave the Tanner Lectures on AI and Human Values, at Stanford University. The event was co-hosted by Stanford’s Institute for Human-Centered AI, and the McCoy Family Center for Ethics in Society.
Scholars from around the world gathered for the first official Philosophy, AI, and Society (PAIS) Workshop at Stanford University on January 27th and 28th. The program featured 24 presentations on a wide variety of topics, including online privacy, speech muffling on digital platforms, fairness metrics for algorithms, the impact of video deepfakes on evidence, moral self-correction in large language models, the ethics of future porn, and many more.
Brian Hedden of the ANU and Katie Creel of Northeastern University organized a workshop on Fairness and Machine Learning: Limitations and Opportunities on January 23rd at Stanford University. Fairness and Machine Learning is co-authored by Solon Barocas, Moritz Hardt, and Arvind Narayanan.
Seth Lazar has been invited to give the Tanner Lectures on AI and Human Values at Stanford in January 2023. The lectures will be hosted by the Stanford Institute for Human-Centered AI and the McCoy Family Center for Ethics in Society.