PEOPLE

MINT is led by Seth Lazar, Professor of Philosophy at the ANU and Distinguished Research Fellow at the University of Oxford. The team includes postdocs, PhD students, honours students, and affiliates working across the moral and political philosophy of data and AI, and sociotechnical AI safety.

RESEARCH

MINT aims to make first-order progress in the normative philosophy of computing and sociotechnical AI safety. Our projects range from moral psychology and moral epistemology as applied to LLMs, through the justification of political authority, to developing LLM evaluations and building AI agents.

ENGAGEMENT

MINT works closely with partners in industry and government around the world to place empirically- and technically-grounded philosophy at the heart of AI Ethics and Safety.

JOIN MINT

MINT is a growing team, and we’re always interested in hearing from prospective PhD and honours students who would like to work on MINTy projects, as well as from potential new affiliates. Express your interest in participating here.

Normative Philosophy of Computing

‘AI’ is understood one way by computer scientists, another by industry, and yet another by governments and the general public. It describes a broad suite of methods for imbuing computational systems with the ability to perceive their environment and act so as to realise particular goals. AI’s impact is greatest when it learns dynamically and can adapt to new scenarios that exceed what its programmers anticipated. But many of the same moral and political problems are raised by much simpler, deterministic, rules-based systems.

Better instead to focus on particular computational systems, and explore their impacts on society. Algorithmic intermediaries, for example, use some of the most advanced techniques in Machine Learning to mediate our social relations, gradually supplanting their physical counterparts with the algorithmic public square, algorithmic markets, and algorithmic sociality. Predictive analytics, often (but not always) supercharged by ML, are used by governments to try to better target resources, in the (often disappointed) hope of reducing costs, and perhaps also bias. Autonomous robotic systems—from self-driving cars to care robots and lethal autonomous weapons—integrate software and hardware in pursuit of capabilities that meet or exceed human performance in the same fields. Software and hardware also unite in new systems of control, including bossware, surveillance tech, and algorithmic bailiffs. And now Large Language Models are enabling an unending array of new applications, from chatbots and universal AI assistants to adaptable, autonomous AI agents.

AI and related computational technologies are already embedded in almost every corner of our lives, and their harms are becoming viscerally apparent. These technologies demand answers, first, to the question of what, if anything, we should use such novel tools to do. But we must also ask who should decide the answer to that first question—and how the power they exercise by way of AI should be constrained.

If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What’s more, the advent of AI in society raises first-order questions in normative philosophy that cannot simply be answered by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.

There is an urgent need for empirically- and technically-grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, reach out here and join our mailing list here.


Research Themes

AI and Power

Ethics for AI Agents

Moral Skill

Sociotechnical AI Safety

New Writing

News

