Normative Philosophy of Computing
‘AI’ is understood one way by computer scientists, another by industry, yet another by governments and the general public. It describes a broad suite of methods for imbuing computational systems with the ability to perceive their environment and act so as to realise some particular goals. AI’s impact is greatest when it learns dynamically and is able to adapt to new scenarios which exceed what its programmers anticipated. But many of the same moral and political problems are raised by much simpler, more deterministic, rules-based systems.
Better instead to focus on particular computational systems, and explore their impacts on society. Algorithmic intermediaries, for example, use some of the most advanced techniques in Machine Learning to mediate our social relations, gradually supplanting their physical counterparts with the algorithmic public square, algorithmic markets, and algorithmic sociality. Predictive analytics, often (but not always) supercharged by ML, are used by governments to try to better target resources, in the (often disappointed) hopes of reducing costs, and perhaps also reducing bias. Autonomous robotic systems—from self-driving cars to care robots and lethal autonomous weapons—integrate software and hardware in the pursuit of capabilities that meet or exceed human standards in the same fields. Software and hardware also unite in new systems for control, including bossware, surveillance tech, and algorithmic bailiffs. And now Large Language Models are enabling an unending array of new applications, from chatbots to universal AI assistants, to adaptable, autonomous AI agents.
AI and related computational technologies are already embedded in almost every corner of our lives. Their harms are already becoming viscerally apparent. They demand answers, first, to the question of what, if anything, we should use these novel tools to do. But we must also ask who should decide the answer to that first question—and how the power that they exercise by way of AI should be constrained.
If our answer to these questions is not simply the abolitionist call to stop building AI altogether, then we need to know just how its use can be justified. To do this, we need to develop a robust new subfield: the normative philosophy of computing. What's more, the advent of AI in society raises first-order questions in normative philosophy that cannot simply be answered by applying an off-the-shelf solution. The search for low-hanging fruit invariably leads to critical errors and mistranslations. And much existing philosophical work on AI connects only coincidentally to actual AI practice, and rarely makes fundamental headway in philosophy itself.
There is an urgent need for empirically- and technically-grounded, philosophically ground-breaking work on the normative philosophy of computing. The MINT lab exists to fill that need—through its own work, and through fostering an international community of like-minded researchers. If you’re interested in learning more about this growing field, then reach out here, and join our mailing list here.
News
Featured
Seth was invited on the Generally Intelligent podcast to discuss issues of power, legitimacy, and the political philosophy of AI.
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?
MINT is teaming up with the HUMANE.AI EU project, represented by PhD student Jonne Maas, to support a workshop on political philosophy and AI, to take place at Kioloa Coastal Campus in June 2024.
MINT is teaming up with colleagues in the US to edit a special section of the ACM Journal on Responsible Computing on Barocas, Hardt and Narayanan's book Fairness and Machine Learning: Limitations and Opportunities.
Together with Aaron Snoswell, Dylan Hadfield-Menell, and Daniel Kilov, Seth Lazar has been awarded USD 50,000 to support work on developing a "moral conscience" for AI agents. The grant will start in April 2024 and run for 9-10 months.
Our special issue of Philosophical Studies on Normative Theory and AI is now live. A few more papers are still to come, but in the meantime you can find eight new papers on AI and normative theory here:
Seth Lazar and lead author Nick Schuster published a paper on algorithmic recommendation in Philosophical Studies.
Seth Lazar and former White House policy advisor Alex Pascal assess democracy's prospects in a world with AGI, in Tech Policy Press.
The headline wasn’t representative, but this was a fun piece to write about the excellent White House OMB memo about AI use within government. Read the whole thing (not the misleading headline) here: https://www.theguardian.com/commentisfree/2023/nov/28/united-states-artificial-intelligence-eu-ai-washington#comments
One of the biggest tech policy debates today is about the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts like the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.
MINT, together with ADM+S, held a stellar workshop on normative philosophy of computing at the Kioloa Coastal Campus, with papers from Jeff Howard, Rachel Sterken and Eliot Michaelson, Jenny Judge, Sina Fazelpour, Sarita Rosenstock, Luise Mueller, Megan Hyska and Raphael Milliere.
Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
Events
Seth contributed to the Singapore Conference on AI with many other AI policy experts, designing and writing question 6: How do we elicit the values and norms to which we wish to align AI systems, and implement them?
Seth teamed up with Tino Cuéllar, the president of the Carnegie Endowment for International Peace, to host a one-day workshop on AI and democracy, featuring legal scholars and political scientists, as well as policy-makers and AI researchers.
Recent progress in LLMs has caused an upsurge in public attention to the field of AI safety, and growing research interest in the technical methods that can be used to align LLMs to human values. At this pivotal time, it is crucial to ensure that AI safety is not restricted to a narrowly technical approach, and instead also incorporates a more critical, sociotechnical agenda that considers the broader societal systems of which AI is always a part. This workshop brought together some of the leading practitioners of this approach to crystallize it and support further integration into both research and practice.
Nick Schuster represented MINT in organising a successful Dagstuhl seminar on responsible robotics.
On January 26th and 27th, Professor Seth Lazar gave the Tanner Lectures on AI and Human Values, at Stanford University. The event was co-hosted by Stanford’s Institute for Human-Centered AI, and the McCoy Family Center for Ethics in Society.
Scholars from around the world gathered for the first official Philosophy, AI, and Society (PAIS) Workshop at Stanford University on January 27th and 28th. The program featured 24 presentations on a wide variety of topics, including online privacy, speech muffling on digital platforms, fairness metrics for algorithms, the impact of video deepfakes on evidence, moral self-correction in large language models, the ethics of future porn, and many more.
Brian Hedden of the ANU and Katie Creel of Northeastern University organized a workshop on Fairness and Machine Learning: Limitations and Opportunities on January 23rd at Stanford University. Fairness and Machine Learning is co-authored by Solon Barocas, Moritz Hardt, and Arvind Narayanan.
Seth Lazar has been invited to give the Tanner Lectures on AI and Human Values, at Stanford, in January 2023. The lectures will be hosted by Stanford's Institute for Human-Centered AI and the McCoy Family Center for Ethics in Society.