Research

AI AND POWER

Public and private actors are increasingly using secret, complex and inscrutable automated systems to shape our prospects, options and attitudes. Can these new and intensified power relations be justified? What would it take for such power to be exercised permissibly?

MORAL SKILL

Automation of human practices commonly leads to the degradation of our skilled performance of those practices. Will our growing reliance on AI and automation undermine morally important practical and theoretical skills? Can we design AI systems that make us better moral agents?

ETHICS FOR AI AGENTS

Advances in LLMs have made possible new kinds of AI agents, in which a tool-using, augmented LLM is in executive control. We need to anticipate the societal impact of these agents, determine the right norms to guide their behaviour (and our use of them), and design policies and technical interventions to match.

SOCIOTECHNICAL AI SAFETY

As AI systems have become more capable, the field of AI Safety has come to the fore. It has focused, however, on narrowly technical means of shaping the behaviour of AI systems. Robust protection against AI risks requires a sociotechnical approach to set these systems in context, and identify the most promising points of intervention.
