01

cat cv/employment.md

Employment
11 positions
2025- Professor. School of Government and Policy, Johns Hopkins University
2025 Visiting Faculty Researcher. Google (0.2 FTE, six-month appointment)
22-25 ARC Future Fellow. School of Philosophy, RSSS, ANU
2020- Professor. School of Philosophy, RSSS, ANU
18-21 Project Leader. Humanising Machine Intelligence Grand Challenge, ANU
17-19 Head of School. School of Philosophy, RSSS, ANU
17-19 Associate Professor. School of Philosophy, RSSS, ANU
2015- Senior Research Fellow. School of Philosophy, RSSS, ANU
11-14 Continuing Research Fellow. School of Philosophy, RSSS, ANU
09-11 Research Associate, Institute for Ethics, Law, and Armed Conflict (ELAC), University of Oxford
07-09 Retained Lecturer in Political Theory, Pembroke College, Oxford
02

cat cv/education.md

Education
3 qualifications
06-09 D.Phil. Politics (Political Theory), Department of Politics, Oxford. 'War and Associative Duties'. Supervised by Henry Shue; examined by Jeff McMahan and David Rodin
04-06 M.Phil. Politics (Political Theory), Department of Politics, Oxford. Distinction. 'A Critical Analysis of Corrective Justice', supervised by David Miller
99-02 BA (Hons) English Language and Literature, Wadham College, Oxford. First Class Honours.
03

cat cv/fellowships.md

Fellowships, Honours
19 honours
24-25 Visiting Professor, School of Philosophy, Hong Kong University
24-25 Senior AI Advisor, Knight First Amendment Institute, Columbia University
2024- Non-Resident Fellow at the Carnegie Endowment for International Peace
2023 Tanner Lectures on AI and Human Values, Human-Centred AI Institute and McCoy Center for Ethics in Society, Stanford University
2022 Mala and Solomon Kamm Lecture in Ethics, Safra Centre for Ethics, Harvard University
22-26 ARC Future Fellow. School of Philosophy, RSSS, ANU
2021- Distinguished Research Fellow, Institute for Ethics in AI and Faculty of Philosophy, University of Oxford (honorary position)
2021 Member of 14-person US National Academies of Sciences, Engineering, and Medicine Study Committee on 'Ethics and Governance of Responsible Computing Research', led by Professor Barbara Grosz (Harvard)
2019 Vice-Chancellor's Award for Excellence in Research, ANU
2016 Academy of Social Sciences of Australia Early Career Researcher Commendation (Panel D: History and Philosophy)
13-15 ARC Discovery Early Career Research Award Fellow. School of Philosophy, RSSS, ANU
2012 Carnegie Ethics Council Global Ethics Fellowship
2011 Visiting fellow at programme on Sovereignty, Global Justice, and the Ethics of War, Institute of Advanced Studies (IAS), Hebrew University of Jerusalem
2011 American Philosophical Association Frank Chapman Sharp Memorial Prize for the best unpublished monograph on the philosophy of war and peace
2009 Res Publica Postgraduate Essay Prize for 2008
2008 Society for Applied Philosophy Annual Conference Postgraduate Essay Prize
2007 Social Science Division Teaching Excellence Award (Category A)
02-03 Harvard University, Frank Knox Fellowship
00-02 Wadham College, Schools Prize and Junior Scholarship
04

cat cv/books.md

Monographs, Edited Volumes, and Symposia
11 publications
2026 The Algorithmic City: Power, Justice, and AI Seth Lazar · Stanford Tanner Lectures on AI and Human Values (edited by Rob Reich, with responses by Marion Fourcade, Arvind Narayanan, Renee Jorgensen, and Josh Cohen)
2025 AI and Democratic Freedoms Seth Lazar · Knight First Amendment Institute Symposium (with Katy Glenn Bass)
2023 Normative Theory and AI Seth Lazar · Philosophical Studies (guest editor with Claire Benn, Todd Karhu, and Pamela Robinson)
2022 The Political Philosophy of Data and AI Seth Lazar · Canadian Journal of Philosophy (guest editor with Kate Vredenburgh and Annette Zimmermann)
2021 Proceedings of the 2021 AAAI/ACM Artificial Intelligence, Ethics, and Society Conference Seth Lazar · Co-Editor, with Marion Fourcade, Ben Kuipers, and Deirdre Mulligan
2020 Topical Collection on Norms for Risk Seth Lazar · Synthese (guest editor, with Alan Hájek and lead editor Renee Jorgensen)
2018 The Oxford Handbook of Ethics of War Seth Lazar · Oxford: Oxford University Press (Co-Editor, with Helen Frowe)
2017 Symposium on Ethics and Decision Theory Seth Lazar · Ethics (guest editor), 127/3
2016 Sparing Civilians Seth Lazar · Oxford: Oxford University Press
2014 The Morality of Defensive War Seth Lazar · Oxford: Oxford University Press. (Co-Editor, with Cécile Fabre)
2011 Symposium on Jeff McMahan's Killing in War Seth Lazar · Ethics (guest editor), 122/1
05

cat cv/papers.md

Published Articles, Pre-Prints, and Reports
72 papers
2026 Using LLMs to Advance Democratic Values Lazar, Lorenzo Manuali · Minds and Machines

Assesses whether LLMs used for summarising deliberative content, aggregating opinion, and predicting voter preferences genuinely advance democratic values or merely simulate democratic processes. Argues that optimism outpaces evidence, and that careful normative analysis is needed before deploying LLMs in democratic institutions.

2026 Resource Rational Contractualism Should Guide AI Alignment Sydney Levine (lead author), Matija Franklin, Tan Zhi-Xuan, Secil Yanik Guyot, Lionel Wong, Daniel Kilov, Yejin Choi, Joshua B. Tenenbaum, Noah Goodman, Seth Lazar, Iason Gabriel · International Association for Safe and Ethical AI Conference (IASEAI), Paris 2026

Proposes Resource-Rational Contractualism (RRC) as an alignment framework: AI systems approximate the agreements rational parties would form using normatively-grounded, cognitively-inspired heuristics that trade effort for accuracy. An RRC-aligned agent operates efficiently while adapting dynamically to the changing human social world.

2026 Discerning What Matters: A Multi-Dimensional Assessment of Moral Competence in LLMs Lazar, Daniel Kilov, Caroline Hendy, Secil Yanik Guyot, and Aaron J. Snoswell · IASEAI Paris 2026 (also accepted to Foundations of Reasoning in Language Models Workshop, NeurIPS 2025)

Identifies three shortcomings in existing LLM moral competence evaluations: over-reliance on prepackaged scenarios, focus on verdict prediction over reasoning, and inadequate testing of when models recognise they need more information. Introduces a five-dimensional assessment grounded in philosophical research on moral skill, covering perception, reasoning, self-knowledge, action, and reflection.

2026 Beyond Verdicts: Evaluating Language Model Moral Competence Lazar, Aaron J. Snoswell, Daniel Kilov · Association for the Advancement of Artificial Intelligence Conference (AI Alignment Track)

Proposes a richer framework for evaluating ethical competence in LLMs that goes beyond binary verdict accuracy to assess the quality of moral reasoning.

2025 Military AI Cyber Agents (MAICAs) Constitute a Global Threat to Critical Infrastructure Lazar, Tim Dubber (lead author) · accepted to Regulatable ML Workshop, NeurIPS 2025

Autonomous AI cyber-weapons -- Military-AI Cyber Agents (MAICAs) -- create a credible pathway to catastrophic risk, given their technical feasibility and the geopolitical dynamics of cyberspace. Political, defensive-AI, and analogue-resilience measures are proposed to blunt the threat.

2025 The AI Power Disparity Index: Toward a Compound Measure of AI Actors' Power to Shape the AI Ecosystem Lazar, with Rachel Kim and Blaine Kuehnert (lead authors), Ranjit Singh, and Hoda Heidari · Knight First Amendment Institute, September 2025 (also published in ACM/AAAI AI, Ethics and Society Conference, 2025)

Develops a compound measure of how AI actors' power shapes the AI ecosystem, providing an empirical tool for tracking concentration and disparity.

2025 On the Moral Case for Using Language Model Agents for Recommendation Lazar, Luke Thorburn, Tian Jin, Luca Belli · Inquiry

Makes a moral case for deploying language model agents in recommendation systems, arguing they could better serve user interests than current algorithmic approaches.

2025 AI Agents and Democratic Resilience Lazar, Mariano-Florentino Cuéllar · Knight First Amendment Institute, September 2025

Examines how AI agents may affect democratic institutions and proposes strategies for ensuring democratic resilience in the face of increasingly autonomous AI systems.

2025 Anticipatory AI Ethics Seth Lazar · Knight First Amendment Institute, April 2025

Argues for an anticipatory approach to AI ethics that addresses the societal impacts of emerging AI capabilities before they are widely deployed.

2025 Infrastructure for AI Agents Alan Chan (lead author), Kevin Wei, Sihao Huang, Nitarshan Rajkumar, Elija Perrier, Seth Lazar, Gillian K Hadfield, Markus Anderljung · Transactions of Machine Learning Research

Argues that making AI agents safe and useful requires not just training-time behavioural modifications but external infrastructure -- including efficient agent-to-agent communication, attribution of actions to legal entities, and mechanisms for trust and accountability. Proposes an infrastructure-centric research agenda covering identification, data access, metering, and safety standards.

2025 Position: Build Agent Advocates, not Platform Agents Lazar, Sayash Kapoor, Noam Kolt · International Conference on Machine Learning

Warns that if dominant internet platforms control language model agents, the resulting 'platform agents' will deepen surveillance, tighten lock-in, and entrench incumbents. Advocates instead for user-controlled 'agent advocates,' requiring broad public access to compute and models, open interoperability standards, and market regulation to prevent platform foreclosure.

2025 Using LLMs to Enhance Democracy Lazar, Lorenzo Manuali · ACM Fairness, Accountability, and Transparency Conference (non-archival)

Assesses whether LLMs used for summarising deliberative content, aggregating opinion, and predicting voter preferences genuinely advance democratic values or merely simulate democratic processes. Argues that optimism outpaces evidence, and that careful normative analysis is needed before deploying LLMs in democratic institutions.

2025 Governing the Algorithmic City Seth Lazar · Philosophy & Public Affairs, 53/2, 102-168

Drawing on Dewey, argues that computing technologies have transformed political association analogously to industrialisation, and develops a framework for governing the resulting algorithmic city.

2024 Position: On the Societal Impact of Open Foundation Models Lazar, with lead authors Rishi Bommasani, Sayash Kapoor, et al. · International Conference on Machine Learning

Identifies five distinctive properties of open foundation models -- including greater customizability and poor monitoring -- that drive both their benefits and risks. Finds that current open models pose limited marginal risk of misuse beyond existing tools, but warns this calculus may shift as capabilities improve.

2024 Attention, Moral Skill, and Algorithmic Recommendation Lazar, Nick Schuster · Philosophical Studies, 182/1, 159-184

Analyses how algorithmic recommendation systems affect the development and exercise of moral skill, focusing on the role of attention in ethical perception.

2024 On the Site of Predictive Justice Lazar, Jake Stone · Noûs, 58/3, 730-754

Examines where justice demands intervene in predictive systems, analysing the appropriate locus of moral evaluation for algorithmic prediction.

2024 Legitimacy, Authority, and Democratic Duties of Explanation Seth Lazar · Oxford Studies in Political Philosophy, Volume 10, 28-56

Develops an account of democratic duties of explanation grounded in political legitimacy and authority, with implications for AI-driven decision-making.

2024 Frontier AI Ethics Seth Lazar · Aeon

Focuses on what makes generative AI systems genuinely distinctive rather than rehashing familiar AI pathologies or leaping to existential risk: their scientific achievement and the most consequential societal changes they will bring over the next decade. Explores the normative questions raised by language model agents that form the core of autonomous AI systems acting in the world.

2024 Can Democracy Survive Artificial General Intelligence? Lazar, Alex Pascal · Tech Policy Press
2024 Automatic Authorities: Power and AI Seth Lazar · Collaborative Intelligence: How Humans and AI are Transforming our World, Arathi Sethumadhavan and Mira Lane (eds.), Cambridge: MIT Press

Examines how ML and computational technologies create new forms of power over individuals, as corporations and diminished states exercise authority through automated systems.

2023 Communicative Justice and the Distribution of Attention Seth Lazar · Knight First Amendment Institute

Argues that algorithmic intermediaries govern the digital public sphere through architectures, amplification, and moderation, distributing attention in ways that raise questions of communicative justice.

2023 On the Site of Predictive Justice Lazar, Jake Stone · ACM Fairness, Accountability, and Transparency Conference (non-archival) (published in Noûs, above)

Examines where justice demands intervene in predictive systems, analysing the appropriate locus of moral evaluation for algorithmic prediction.

2023 AI Safety on Whose Terms? Lazar, Alondra Nelson · Science

Rapid, widespread adoption of the latest large language models has sparked both excitement and concern about advanced artificial intelligence (AI). In response, many are looking to the field of AI safety for answers. Major AI companies are purportedly investing heavily in this young research program, even as they cut "trust and safety" teams addressing harms from current systems. Governments are taking notice too. The United Kingdom just invested £100 million in a new "Foundation Model Taskforce" and plans an AI safety summit this year. And yet, as research priorities are being set, it is already clear that the prevailing technical agenda for AI safety is inadequate to address critical questions. Only a sociotechnical approach can truly limit current and potential dangers of advanced AI.

2022 Supererogation and Optimisation Lazar, Christian Barry · Australasian Journal of Philosophy, 102/1, 21-36

This paper examines three approaches to the relationship between our moral reasons to bear costs for others’ sake before and beyond the call of duty. Symmetry holds that you are required to optimise your beneficial sacrifices even when they are genuinely supererogatory. If you are required to bear a cost C for the sake of a benefit B, when they are the only costs and benefits at stake, you are also conditionally required to bear an additional cost C, for the sake of an additional benefit B, when enough other costs and benefits are at stake that both of your alternatives are presumptively supererogatory. Disconnection rejects the requirement to optimise when your options are presumptively supererogatory and maintains that you have an entirely free hand to choose as you will among them. Asymmetry holds that when acting beyond the call of duty you are entitled to a measure of additional freedom compared to when you are not taking on supererogatory costs—you can prioritise your own well-being and reasons to a greater degree—but places constraints on the options that you may permissibly choose. We defend a version of Asymmetry and explore its implications for recent debates on charitable giving.

2022 What's Wrong with Automated Influence Lazar, Claire Benn · Canadian Journal of Philosophy, 52/1, 125-148

Examines three central objections to Automated Influence -- privacy, exploitation, and manipulation -- showing that structural versions of each objection carry more weight than their interactional counterparts. By moving from the interactional focus of 'AI Ethics' to political philosophy, reveals that Automated Influence's core problem is the crisis of legitimacy it precipitates.

2022 Power and AI Seth Lazar · The Oxford Handbook of AI Governance, Johannes Himmelreich et al. (eds.), New York: Oxford University Press

AI and related computational systems are being used by some to exercise power over others. This is especially clear in our online lives, which are increasingly structured and governed by computational systems using some of the most advanced techniques in AI. But it is also apparent in our offline lives, as computational systems using AI are used by powerful actors, including states, local government, and employers. Proponents of various principles of “AI Ethics” sometimes imply that the sole normative function of those principles is to ensure that AI is used to achieve socially acceptable goals. Drawing attention to the ways in which AI systems are used to exercise power demonstrates the inadequacy of this normative analysis. When new and intensified power relations develop, we must attend not only to what power is used for, but also to how and by whom it is used.

2022 Fostering Responsible Computing Research: Foundations and Practices Seth Lazar · A Consensus Study Report of the National Academies of Sciences, Engineering, and Medicine, National Academies Press (lead author Barbara Grosz, Harvard). Report commissioned by the NSF
2021 Deontological Decision Theory and Lesser-Evil Options Lazar, Peter A. Graham · Synthese, 198, 6889-6916

Many philosophers believe in lesser-evil justifications for doing harm: if the only way to stop a trolley from killing five is to divert it away onto one, then we may divert. But recently, Helen Frowe has argued that we do not only have the option to pursue the lesser evil: in most cases, we are so obligated. After critically assessing Frowe’s argument, I develop three mutually compatible accounts of lesser-evil options, which permit, but do not obligate us to minimize harm. These are the Parity Account, the Prerogative Account, and the Permissible Moral Mistakes Account. Considerations of parity and prerogatives have arisen in this debate before, but in inchoate form. The Permissible Moral Mistakes Account introduces something new.

2020 Duty and Doubt Seth Lazar · Journal of Practical Ethics, 8/1, 28-55

Deontologists have avoided decision-making under uncertainty because standard decision theory appears superficially consequentialist, but its central tenets pose no genuine obstacle. Maps the path toward a comprehensive deontological decision theory that respects the priority of the right over the good.

2020 Should I Use that Rating Factor? A Philosophical Approach to an Old Problem Lazar, Chris Dolman (lead author), Tiberio Caetano, Dimitri Semenovich · 2020 All Actuaries Summit
2019 Deontological Decision Theory and the Grounds of Subjective Permissibility Seth Lazar · Oxford Studies in Normative Ethics, 9

If we had perfect information, then we could say, for any given objectively permissible act, what makes it objectively permissible. But when we have imperfect information, when we must decide under risk and uncertainty, what then makes an act subjectively permissible or impermissible? There are two salient possibilities. The first is the “verdicts” approach. It grounds judgments of subjective permissibility in probabilistically discounted judgments of objective permissibility. The principle “minimize expected objective wrongness” takes this approach. The second is the “reasons” approach. It grounds subjective permissibility in probabilistically discounted objective reasons. “Maximize expected utility” is one example. Chapter 10 considers whether the verdicts approach or the reasons approach to grounding judgments of subjective permissibility is better suited for deontological decision-making with imperfect information. Perhaps surprisingly, the reasons approach comes out on top.

2019 Self-Ownership and Agent-Centred Options Seth Lazar · Social Philosophy and Policy, 36/2, 36-50

I argue that agent-centered options to favor and sacrifice one’s own interests are grounded in a particular aspect of self-ownership. Because you own your interests, you are entitled to a say over how they are used. That is, whether those interests count for or against some action is, at least in part, to be determined by your choice. This is not the only plausible argument for agent-centered options. But it has some virtues that other arguments lack.

2019 Accommodating Options Seth Lazar · Pacific Philosophical Quarterly, 100/1, 233-255

Many of us think we have agent‐centred options to act suboptimally. Some of these involve favouring our own interests. Others involve sacrificing them. In this paper, I explore three different ways to accommodate agent‐centred options in a criterion of objective permissibility. I argue against satisficing and rational pluralism and in favour of a principle built around sensitivity to personal cost.

2019 Moral Status and Agent-Centred Options Seth Lazar · Utilitas, 31/1, 83-105

If we were required to sacrifice our own interests whenever doing so was best overall, or prohibited from doing so unless it was optimal, then we would be mere sites for the realization of value. Our interests, not ourselves, would wholly determine what we ought to do. We are not mere sites for the realization of value – instead we, ourselves, matter unconditionally. So we have options to act suboptimally. These options have limits, grounded in the very same considerations. Though not merely such sites, you and I are also sites for the realization of value, and our interests (and ourselves) must therefore sometimes determine what others ought to do, in particular requiring them to bear reasonable costs for our sake. Likewise, just as my moral status grounds a requirement that others show me appropriate respect, so must I do to myself.

2019 Axiological Absolutism and Risk Lazar, Chad Lee-Stronach · Noûs, 53/1, 97-113

Consider the following claim: given the choice between saving a life and preventing any number of people from temporarily experiencing a mild headache, you should always save the life. Many moral theorists accept this claim. In doing so, they commit themselves to some form of ‘moral absolutism’: the view that there are some moral considerations (like being able to save a life) that cannot be outweighed by any number of lesser moral considerations (like being able to avert a mild headache). In contexts of certainty, it is clear what moral absolutism requires of you. However, what does it require of you when deciding under risk? What ought you to do when there is a chance that, say, you will not succeed in saving the life? In recent years, various critics have argued that moral absolutism cannot satisfactorily deal with risk and should, therefore, be abandoned. In this paper, we show that moral absolutism can answer its critics by drawing on—of all things—orthodox expected utility theory.

2018 Limited Aggregation and Risk Seth Lazar · Philosophy & Public Affairs, 46/2, 117-159

Deontological ethical theory and unqualified absolutism were once near synonyms: 'do justice though the heavens fall'. Few deontologists today are so hard-line. But many still believe that some trade-offs that would yield unambiguously better outcomes are nonetheless wrong. Here is one such scenario, call it Life for Headaches: We must choose either to avert a minor, temporary headache for each member of a multitude or to save one person's life. No matter how numerous the multitude, we ought always to save the one.

2018 Moral Sunk Costs Seth Lazar · Philosophical Quarterly, 68/273, 841-861

The problem of moral sunk costs pervades decision-making with respect to war. In the terms of just war theory, it may seem that incurring a large moral cost results in permissiveness: if a just goal may be reached at a small cost beyond that which was deemed proportionate at the outset of war, how can it be reasonable to require cessation? On this view, moral costs already expended could have major implications for the ethics of conflict termination. Discussion of sunk costs in moral theorizing about war has settled into four camps: Quota, Prospect, Addition, and Discount. In this paper, I offer a mathematical model that articulates each of these views. The purpose of the mathematisation is threefold. First, to unify the sunk costs problem. Second, to show that these views differ in the nature of their justifications: some are justified qualitatively and others quantitatively. Third, to clarify the differential force of qualitative and quantitative critiques of these four views.

2018 Strengthening Moral Distinction Seth Lazar · Law and Philosophy, 37/3, 327-349 (response to symposium on Sparing Civilians)
2018 In Dubious Battle: Uncertainty and the Ethics of Killing Seth Lazar · Philosophical Studies, 175/4, 859-883

Surveys two approaches to deontological decision-making under uncertainty about killing: using double-effect reasoning directly as a guide to subjective permissibility, and fitting deontological ethics into decision-theoretic apparatus. Shows the first approach faces insurmountable obstacles while the second holds much more promise than deontologists or their critics have assumed.

2018 The Ethics of War Seth Lazar · (with Helen Frowe) The Oxford Handbook of Ethics of War, Seth Lazar and Helen Frowe (eds.), New York: Oxford University Press

It is important to reflect on the way we evaluate the laws and customs of armed conflict and the responsibilities we take on when we criticize and propose possible changes to them. These laws are not robust, and there is a danger that criticism may undermine their force while not providing effective alternatives. Moreover, in the area of armed conflict, it is easy to underestimate the pressures that a satisfactory set of norms has to respond to and easy to exaggerate the “merely” conventional character of such norms. Laws of war must be administrable in circumstances of fear, confusion, and violence and must include elements of technicality difficult to understand in philosophical terms. One of the most influential of recent laws of war revisionists, Jeff McMahan, acknowledges that his deep moral critique of existing norms of armed conflict does not necessarily yield a set of prescriptions for legal reform. This chapter extends McMahan's point and counsels the utmost caution in these critiques and re-examinations.

2018 Method in the Morality of War Seth Lazar · The Oxford Handbook of Ethics of War, Seth Lazar and Helen Frowe (eds.), New York: Oxford University Press

This chapter introduces the two main ways to think about the ethics of war. The first is to start by thinking about war. The second is to think about the ethics of killing outside of war, then apply those principles to the case of war. In contemporary just war theory, the first approach has most commonly been associated with those who broadly aim to vindicate international law, such as Michael Walzer and his contemporary defenders. The second approach is more frequently linked to the work of Jeff McMahan, and Walzer’s other revisionist critics. I show that this conflation is mere accident. Indeed, perhaps the richest terrain to be ploughed is in the combinations that have been relatively neglected—vindications of international law that start from cases based outside of war; critiques of international law based on the distinctive nature of war.

2017 Risky Killing: How Risks Worsen Violations of Objective Rights Seth Lazar · Journal of Moral Philosophy, 16/1, 1-26
2017 Deontological Decision Theory and Agent-Centred Options Seth Lazar · Ethics, 127/3, 579-609

Deontologists have long been upbraided for lacking an account of justified decision-making under risk and uncertainty. One response is to develop a deontological decision theory—a set of necessary and sufficient conditions for an act’s being permissible given an agent’s imperfect information. In this article, I show that deontologists can make more use of regular decision theory than some might have thought, but that we must adapt decision theory to accommodate agent-centered options—permissions to favor or sacrifice our own interests, when doing so is overall morally worse. Accommodating options requires more than just amending the decision-theoretic ‘value function’. We must change the decision rule as well.

2017 Response: Limiting Defensive Rights Seth Lazar · Journal of Applied Philosophy, 34/1, 19-23
2017 Proxy Battles in Just War Theory: Jus in Bello, the Site of Justice, and Feasibility Constraints Lazar, Laura Valentini · Oxford Studies in Political Philosophy, Volume 3

Interest in just war theory has boomed in recent years, as a revisionist school of thought has challenged the orthodoxy of international law, most famously defended by Michael Walzer [1977]. These revisionist critics have targeted the two central principles governing the conduct of war (jus in bello): combatant equality and noncombatant immunity. The first states that combatants face the same permissions and constraints whether their cause is just or unjust. The second protects noncombatants from intentional attack. In response to these critics, some philosophers have defended aspects of the old orthodoxy on novel grounds. Revisionists counter. As things stand, the prospects for progress are remote. In this paper, we offer a way forward. We argue that exclusive focus on first-order moral principles, such as combatant equality and noncombatant immunity, has led revisionist and orthodox just war theorists to engage in “proxy battles.” Their first-order moral disagreements are at least partly traceable to second-order disagreements about the nature and purpose of political theory. These deeper disputes have been central to the broader discipline of political theory for several years; we hope that bringing them to bear on the ethics of war will help us move beyond the present impasse.

2017 Evaluating the Revisionist Critique of Just War Theory Seth Lazar · Daedalus, 146/1, 113-124

Situates the revisionist critique of Walzer's just war principles -- noncombatant immunity, proportionality, and combatant equality -- and illustrates the disturbing implications that follow from revisionist premises. Concludes that while revisionist arguments are powerful, alternative foundations for key tenets of international humanitarian law remain available.

2017 Just War Theory: Revisionists Vs Traditionalists Seth Lazar · Annual Review of Political Science, 20, 37-54

Maps the contemporary divide between revisionist and traditionalist just war theory, from Walzer's foundational defence of international humanitarian law through McMahan's revisionist critique to the subsequent traditionalist revival. Shows that although revisionism identifies genuine moral problems, traditionalists can accommodate its strongest insights without abandoning the laws of armed conflict's moral foundations.

2017 Anton's Game: Deontological Decision Theory for an Iterated Decision Problem Seth Lazar · Utilitas, 29/1, 88-109

How should deontologists approach decision-making under uncertainty, for an iterated decision problem? In this article I explore the shortcomings of a simple expected value approach, using a novel example to raise questions about attitudes to risk, the moral significance of tiny probabilities, the independent moral reasons against imposing risks, the morality of sunk costs, and the role of agent-relativity in iterated decision problems.

2016 Complicity, Collectives, and Killing in War Seth Lazar · Law and Philosophy, 35/4, 365-389

Rejects the argument that unjust combatants are liable to be killed solely through complicity in their side's wrongful war, and that noncombatants escape targeting because they lack complicity. Argues instead that just combatants have reason to direct force against unjust combatants out of respect for the reasonable self-determining decisions of other political communities.

View source →
2016 The Justification of Associative Duties Seth Lazar · Journal of Moral Philosophy, 13/1, 28-55

People often think that their special relationships with family, friends, comrades and compatriots, can ground moral reasons. Among these reasons, they understand some to be duties – pro tanto requirements that have genuine weight when they conflict with other considerations. In this paper I ask: what is the underlying moral structure of associative duties? I first consider and reject the orthodox Teleological Welfarist account, which first observes that special relationships are fundamental for human well-being, then claims that we cannot have these relationships, if we do not recognise associative duties, before concluding that we should therefore recognise associative duties. I then introduce a nonteleological alternative, grounded in the Appropriate Response approach to ethical theory.

View source →
2016 Authorization and the Morality of War Seth Lazar · Australasian Journal of Philosophy, 94/2, 211-226

Why does it matter that those who fight wars be authorized by the communities on whose behalf they claim to fight? I argue that lacking authorization generates a moral cost, which counts against a war's proportionality, and that having authorization allows the transfer of reasons from the members of the community to those who fight, which makes the war more likely to be proportionate. If democratic states are better able than non-democratic states and sub-state groups to gain their community's authorization, this means that some wars will be proportionate if fought by democracies, disproportionate if not.

View source →
2016 War Seth Lazar · Stanford Encyclopedia of Philosophy, Zalta (ed.), Spring 2016 edition
2016 War's Endings and the Structure of Just War Theory Seth Lazar · The Ethics of War, Sam Rickless and Saba Bazargan (eds.), New York: Oxford University Press
2016 Travel, Friends, and Killing Seth Lazar · in Philosophers Take on the World, Edmonds (ed.), Oxford: Oxford University Press, 25-27
2016 The Associativist Account of the Ethics of War Seth Lazar · Global Political Theory, David Held and Pietro Maffettone (eds.), Cambridge: Polity, 158-179

Grounds part of the justification for killing in war in the associative duties we owe to loved ones -- to protect them from the severe harms war threatens. Shows how these duties can override the rights to life of those who must be killed, and how soldiers fighting on behalf of their community can act on reasons that apply to its members.

View source →
2016 Liability and the Ethics of War: A Reply to Strawser and McMahan Seth Lazar · The Ethics of Self-Defence, Coons and Weber (eds.), New York: Oxford University Press, 292-304
2015 Risky Killing and the Ethics of War Seth Lazar · Ethics, 126/1, 91-117

Killing civilians is worse than killing soldiers. Although this principle is widely affirmed, recent military practice and contemporary just war theory have undermined it. This article argues that killing an innocent person is worse the likelier it was, when you acted, that he would be innocent: riskier killings are worse than less risky killings. In war, killing innocent civilians is almost always riskier than killing innocent soldiers. So killing innocent civilians is worse than killing innocent soldiers. Since almost all civilians are innocent in war, and since killing innocent civilians is worse than killing liable soldiers, killing civilians is worse than killing soldiers.

View source →
2015 Authority, Oaths, Contracts, and Uncertainty in War Seth Lazar · Thought, 4/1, 52-58
2014 Necessity and Noncombatant Immunity Seth Lazar · Review of International Studies, 40/1, 53-76

The principle of noncombatant immunity prohibits warring parties from intentionally targeting noncombatants. I explicate the moral version of this view and its criticisms by reductive individualists; they argue that certain civilians on the unjust side are morally liable to be lethally targeted to forestall substantial contributions to that war. I then argue that reductivists are mistaken in thinking that causally contributing to an unjust war is a necessary condition for moral liability. Certain noncontributing civilians—notably, war-profiteers—can be morally liable to be lethally targeted. Thus, the principle of noncombatant immunity is mistaken as a moral (though not necessarily as a legal) doctrine, not just because some civilians contribute substantially, but because some unjustly enriched civilians culpably fail to discharge their restitutionary duties to those whose victimization made the unjust enrichment possible. Consequently, the moral criterion for lethal liability in war is even broader than reductive individualists have argued.

View source →
2014 National Defence, Self-Defence, and the Problem of Political Aggression Seth Lazar · in The Morality of Defensive War, Lazar and Fabre (eds.), Oxford: Oxford University Press, 9-37

This chapter explains why one familiar and otherwise plausible approach to the justification of killing in war cannot adequately ground commonsense views of permissible national defence. Reductionists believe that justified warfare reduces to an aggregation of acts that are justified under ordinary principles of interpersonal morality. The standard form of reductionism focuses on the principles governing killing in ordinary life, specifically those that justify intentional killing in self- and other-defence, and unintended but foreseen (for short, collateral) killing as a lesser evil. Justified warfare, on this view, is no more than the coextension of multiple acts justified under these two principles. Reductionism is the default philosophical approach to thinking through the ethics of killing in war. It makes perfect sense to ask what principles govern permissible killing in general, before applying them to the particular context of war.

View source →
2013 Associative Duties and the Ethics of Killing in War Seth Lazar · Journal of Practical Ethics, 1/1, 3-48

Grounds part of the justification for killing in war in the associative duties we owe to loved ones -- to protect them from the severe harms war threatens. Shows how these duties can override the rights to life of those who must be killed, and how soldiers fighting on behalf of their community can act on reasons that apply to its members.

View source →
2013 War Seth Lazar · International Encyclopaedia of Ethics, Lafollette (ed.), Wiley-Blackwell
2013 Just War Theory Seth Lazar · Oxford Companion to Comparative Politics, Joel Krieger (ed.), Oxford University Press

War unleashes deadly violence and destruction and inverts the normal codes and conventions of society: killing, maiming, and injuring become justifiable; damage to land, resources, and buildings a normal occurrence; and the seizure of people and property a regularity. Following a century that has accustomed us to think about war in terms of totalitarian effects and total consequences, of genocidal campaigns, of the obliteration of cities and the threat of mutually assured destruction with atomic weapons, it may seem an arcane proclamation that war's reach and vehemence ought to be restrained. Yet that is what the just war conventions seek to explore and to uphold as guides to human action. While the pacifist rejects war absolutely and thereby contends that all acts within war are immoral and inexcusable and the militarist declares that in war (as in love) all is fair (Holmes, 1989), the just war theorist takes a middle position, acknowledging that should war break out, a range of considerations concerning its justification and the procedures that soldiers follow ought to be maintained as guides to ensure, in effect, that some sanctuaries from war's evils are acknowledged (Clark, 1988; Norman, 1995; Walzer, 1977).

2012 Necessity in Self-Defense and War Seth Lazar · Philosophy & Public Affairs, 40/1, 3-44

Mainstream moral beliefs about war seem to be inconsistent with mainstream moral beliefs about self-defense such as the imminence requirement, the requirement to retreat, and restrictions on responses to conditional threats. This article argues that these apparent inconsistencies are actually the result of the necessity principle applied to environments with different nonmoral social facts. War takes place in the anarchy of international relations, where a lack of effective cosmopolitan security institutions makes it necessary to confront nonimminent threats, stand one’s ground, and respond to conditional threats in order to defeat and deter aggression. Self-defense, on the other hand, usually occurs in societies with effective police. In such an environment, it is unnecessary for private citizens to take similar actions to defeat and deter aggression. This is a good reason to believe that mainstream moral beliefs about war are closer to the true morality of war than recent critics claim.

View source →
2012 Scepticism about Jus Post Bellum Seth Lazar · Morality, Jus Post Bellum, and International Law, Larry May and Andrew Forcehimes (eds.), Cambridge: CUP, 204-22

The successful transition from armed conflict to peace is one of the greatest challenges of contemporary warfare. The laws and principles governing transitions from conflict to peace (jus post bellum) have only recently gained attention in legal scholarship. This volume investigates questions concerning the core of jus post bellum: the law (“jus”), the temporal aspect (“post”), and different types of armed conflict (“bellum”). It is the first volume to clarify the different legal meanings and components of the concept, including its implications in contemporary politics and practice. It explores the nature of jus post bellum as a concept, including its foundations, criticisms, and relationship to related concepts (e.g. Transitional Justice, Responsibility to Protect). It rethinks the nexus of the concept to jus ad bellum and jus in bello and its relevance in internal armed conflicts and peacebuilding. It examines problems in relation to the ending of conflict, including indicators for the end of conflict, exit strategies, and institutional responses. It also identifies contours of a “jus,” drawing on disparate bodies and sources of international law such as peace agreements, treaty law, self-determination, norms governing peace operations, and the status of foreign armed forces, environmental law, human rights, and amnesty law. Taking into account perspectives from multiple disciplines, the book will be relevant to scholars, practitioners, and students across many fields, such as peace and conflict studies, international relations, philosophy, political science, and international law.

View source →
2012 The Morality and Law of War Seth Lazar · Routledge Companion to Philosophy of Law, Andrei Marmor (ed.), London: Routledge, 364-79

Just war theory provides an account of rights and duties in war. At its heart lies a distinction between two sets of rules. The jus ad bellum provides a set of conditions under which going to war is deemed just. The jus in bello provides a set of conditions for just conduct in war. This chapter discusses three challenges to just war theory and the responses to each.

View source →
2010 A Liberal Defence of (Some) Duties to Compatriots Seth Lazar · Journal of Applied Philosophy, 27/3, 246-257

This paper asks whether we can defend associative duties to our compatriots that are grounded solely in the relationship of liberal co‐citizenship. The sort of duties that are especially salient to this relationship are duties of justice, duties to protect and improve the institutions that constitute that relationship, and a duty to favour the interests of compatriots over those of foreigners. Critics have argued that the liberal conception of citizenship is too insubstantial to sustain these duties — indeed, that it gives us little reason to treat compatriots any differently from how we treat foreigners, with all the practical consequences that this would entail. I suggest that on a specific conception of liberal citizenship we can, in fact, defend associative duties, but that these extend only to the duty to protect and improve the institutions that constitute that relationship. Duties of justice and favouritism, I maintain, cannot be particularised to one's compatriots.

View source →
2010 The Responsibility Dilemma for Killing in War: A Review Essay Seth Lazar · Philosophy & Public Affairs, 38/2, 180-213

On one popular conception of how to do political theory, we should start with our considered judgments, try to work them together into a coherent theory, and then test our judgments against the theory, and the theory against the judgments, to see if either needs modification. Philosophical discussion of the ethics of war has taken exactly this form: there are certain considered judgments, best enunciated by Michael Walzer, to which many hold. Only combatants may be intentionally targeted in war; unintended harms to noncombatants must be minimized; wars of national defense and humanitarian intervention can be justified. Then there is the theory. Walzer's own loose attempt to synthesize these judgments has been largely discredited. In recent years, philosophers from a more austere ethical tradition have argued that these theoretical failings demand reevaluation of the considered judgments.

View source →
2009 Responsibility, Risk, and Killing in Self-Defense Seth Lazar · Ethics, 119/4, 699-728

This article criticizes the main contemporary theories of justifiable killing in self-defense and elaborates an alternative account: the fault-based internalist suspendable-rights theory (FIST). FIST is a partialist account. The rights not to be killed are such that when one member of the set of rights is suspended, the other rights remain in force. Thus, if A’s right not to be killed by B is suspended, then B no longer has an obligation not to kill A. However, A still has a right not to be killed by C, and thus C’s obligation not to kill A remains in force. In addition, on FIST, a culpable attacker suspends his or her right not to be killed by a defender even in cases in which it is not necessary (necessity condition) for the defender to kill the attacker to save his or her own life.

View source →
2009 Debate: Do Associative Duties Really Not Matter? Seth Lazar · Journal of Political Philosophy, 17/1, 90-101

Associative duties are the special obligations people are commonly thought to have toward certain associates, such as siblings, friends, colleagues, teammates, compatriots, and co‐nationals. While some doubt that associative duties are genuine, many take their existence to be necessary for explaining obligations which, upon reflection, appear difficult to ground in any other source.

View source →
2009 The Nature and Disvalue of Injury Seth Lazar · Res Publica, 15/3, 289-304
2008 Corrective Justice and the Possibility of Rectification Seth Lazar · Ethical Theory and Moral Practice, 11/4, 355-68
06

cat cv/other-writing.md

Other Writing and Interviews
21 items
2024 'Seth Lazar: Normative Philosophy of Computing' The Gradient, podcast interview, link
2024 'The Rise and Fall (and Rise Again) of the First AI Agent Millionaire', Tech Policy Press, link
2024 'Can we really trust AI to channel the public's voice to ministers?' The Guardian, link
2024 'Seth Lazar on Legitimate Power, Moral Nuance, and the Political Philosophy of AI', Generally Intelligent, podcast interview, link
2023 'The US is racing ahead in its bid to control artificial intelligence – why is the EU so far behind?', The Guardian, link
2023 'Model alignment protects against accidental harms, not intentional ones', with Arvind Narayanan and Sayash Kapoor, AI Snake Oil, link
2023 'Political Philosophy in the Age of AI', Philosophy Bites Interview, link
2023 'AI: Is it Out of Control?', Science Vs Interview, link
2023 'Is Avoiding Extinction from AI Really an Urgent Priority?' with Arvind Narayanan and Jeremy Howard, AI Snake Oil, link
2023 'Machines and Morality', New York Times, link
2020 'Large-Scale Facial Recognition is Incompatible with a Free Society', with Claire Benn and Mario Günther, The Conversation, link
2020 'Contact tracing apps are vital tools in the fight against coronavirus. But who decides how they work?', with Meru Sheel, The Conversation, link
2020 'We're Sleepwalking into a World of Mass Surveillance', Barron's, link
2019 'AI and moral intuition: use it or lose it?', ABC Philosopher's Zone interview, link
2019 Interviewed on Episodes 2 and 4 of Series 3 of HiPhi Nation, approx. 100,000 downloads
2018 'Why we need more than just data to create ethical driverless cars' (with Colin Klein), The Conversation, link
2016 'Should Civilians be Spared?' Examining Ethics, podcast, link
2014 'On Human Shields', Boston Review, link
2014 'Sparing Civilians in War' Philosophy Bites, interview by Nigel Warburton and David Edmonds. Released 19/7/2014, link
2013 'The Moral Responsibility of Volunteer Soldiers: Response to McMahan', Boston Review, link
2012 'Seth Lazar on Self-Defense in War' Public Ethics Radio, Christian Barry (ed.), Carnegie Council for Ethics in International Affairs, link
07

cat cv/grants.md

Grants
13 grants
2024 A Conceptual and Practical Toolkit for Sociotechnical AI Safety, Center for Security and Emerging Technology, USD500,000 Sole CI. Two-year project, deferred to start when I arrive at Johns Hopkins.
2024 Language Model Agents and Society Project, Templeton World Charity Foundation, USD999,886 Sole CI. Three-year project on anticipating and steering the societal impacts of Language Model Agents.
2024 Support for Machine Intelligence and Normative Theory Lab, Survival and Flourishing DAF, USD 480,000 Sole CI. Unrestricted funding to support work on sociotechnical AI safety.
2024 'Implementing a "Moral Conscience" for LLM Agents', OpenAI Agents Research Grant — USD50,000 Lead CI, with Dylan Hadfield-Menell, Daniel Kilov and Aaron Snoswell.
2023 'Normative Philosophy of Computing' field building grant. AI2050 — USD50,000 Sole CI. Grant to build up the field of normative philosophy of computing.
2022 'Social Ontology of Large Language Models'. Google Research Unrestricted Gift — USD10,000 Small grant to support work on societal impacts of LLMs.
2022 'Socially Responsible Insurance in the Age of AI'. ARC Linkage Projects (LP21) Award — AUD495,000 with AUD350,000 funding from IAG and AUD100,000 from ANU Lead CI, with Damian Clifford, Jenny Davis, Kimberlee Weatherall, Tiberio Caetano and Chris Dolman. A collaboration with the Gradient Institute and Insurance Australia Group to establish how AI can be used to help realise the social function of insurance while mitigating risks due to discrimination, unaccountable power, and privacy.
2021 'Automatic Authorities: Charting a Course for Legitimate AI'. ARC Future Fellowship (FT21) Award — AUD1,020,698 Sole CI. Project commenced April 2022. This project aims to develop novel theories of power and its justification, and to apply them to the use of AI by state and non-state actors to exercise power.
2019 'Moral Skill and Artificial Intelligence'. Templeton World Charity Foundation 'Diverse Intelligences' Grant — USD234,000 Project Director. With Co-Director Claire Benn, and CIs Jenny Davis, Toni Erskine, Colin Klein. This project will ask whether outsourcing morally weighted decisions to automated systems can lead to 'moral deskilling', and whether there are ways to design automated systems so that they make us better, not worse, at exercising moral judgment.
2019 'Humanising Machine Intelligence'. ANU Grand Challenge Program (AUD5.5m) Founding Lead (stepped down to take up Future Fellowship). Other team members: Damian Clifford, Jenny Davis, Toni Erskine, Colin Klein, Hanna Kurniawati, Sarah Logan, Katie Steele, Sylvie Thiébaux, Lexing Xie. Multidisciplinary project on the morality, law and politics of data and AI. For information see hmi.anu.edu.au
2016 'Ethics and Risk'. ARC Discovery Project Award (DP17) — AUD335,000 Lead CI. Other CIs and PIs: Lara Buchak (Berkeley); Katie Steele (ANU); Alan Hájek (ANU); Frank Jackson (ANU); Philip Pettit (ANU).
2013 'Justifying War'. ARC Discovery Early Career Research Award (DE13) — AUD366,000 Sole CI.
2005 Arts and Humanities Research Council, Doctoral Fellowship — GBP60,000
08

cat cv/events.md

Conferences
2 events
2022 ACM Fairness, Accountability and Transparency Conference General co-chair (one of four) for the top CS and interdisciplinary conference for AI ethics.
2021 AAAI/ACM AI, Ethics and Society Conference Program and General co-chair (one of four) for one of the two top CS and interdisciplinary conferences for AI ethics. Also convened 'Platform Power and AI' panel discussion.
Workshops Organised
16 events
2026 Workshop on Normative Competence, IASEAI 2026, Paris.
2026 AGI-Ready Institution Design Workshop, IASEAI 2026, Paris.
2025 Sociotechnical AI Safety Retreat at Kioloa Coastal Campus; Knight 1A Symposium on AI and Democratic Freedoms.
2024 Normative Philosophy of Computing Workshop at Kioloa Coastal Campus; AI and Catastrophic Risk at ANU; Sociotechnical AI Safety Workshop at ITS Rio; Political Philosophy and AI at Kioloa; Normative Philosophy of Computing at Yale; Knight 1A Symposium on AI and Democratic Freedoms.
2023 Philosophy, AI and Society Workshop and Fair Machine Learning Authors meet Critics Workshop at Stanford; Philosophy AI and Society Doctoral Colloquium at Oxford; Sociotechnical AI Safety Workshop at Stanford; Democracy and AI Workshop at Carnegie Endowment for International Peace.
2022 Philosophy, AI and Society workshop at Harvard
2019 'Ethics and AI'; 'Decision Theory and AI'; FM Kamm Masterclass; Stanford HAI Philosophy, AI, and Society panel and workshop at Stanford
2018 'Ethics and Risk'; 'Foundations of Normative Ethics'
2017 'On the work of Marc Fleurbaey'; 'Awesome Workshop in Normative Ethics'
2016 'Awesome Workshop in Normative Ethics and Political Philosophy'; PPE Masterclass; Dale Dorsey Masterclass; Jake Ross Masterclass; Wlodek Rabinowicz Masterclass
2015 'Ethics and Decision Theory'
2014 'Feasibility and the Ethics of War' (with Nic Southwood); AAP Stream on Ethics of Force; Christian List Masterclass
2013 Honours/Masters Workshop; Legitimacy and Authority; Niko Kolodny Masterclass; Enoch/Jackson/Smith Masterclass; Tom Dougherty Masterclass
2011 'War and Global Justice', IAS, Hebrew University of Jerusalem
2010 'Why We Fight: The Purposes of Military Force in the Twenty-First Century', second meeting of the ELAC Workshop, Oxford; 'Eliminative and Manipulative Agency in the Ethics of Self-Defence', ELAC, Oxford
2009 'Killing in War workshop', first meeting of the ELAC Workshop, Oxford
Public Lectures Organised
6 events
2022 Jamie Susskind launching the Machine Intelligence and Normative Theory (MINT) Lab, https://philosophy.cass.anu.edu.au/index.php/news/jamie-susskind-lecture
2019 Jack Smart lecture by FM Kamm; HMI Lectures by Walter Sinnott-Armstrong, Shannon Vallor, David Danks, Kate Crawford
2018 Jack Smart lecture by Michael Smith; Philosophy and Public Policy Lecture by Huw Price
2017 Jack Smart lecture by Peter Godfrey-Smith; Launch of Centre for Philosophy of the Sciences, public lecture by Peter Godfrey-Smith; Passmore Lecture by Marc Fleurbaey; Philosophy and Public Policy Lecture by Peter Singer
2016 Philosophy and Public Policy Lecture by Leif Wenar; Passmore Lecture by Liz Anderson
2014 Passmore Lecture by Jeff McMahan
09

cat cv/presentations.md

Named Lectures
4 talks
'On AI Personhood Without Sentience', Arthur and Barbara Gianelli Annual Lecture, St John's University, 2025
'What, if anything, should we do, now, about catastrophic AI risk?'. Scholl Lecture, Purdue University, 2024.
'Algorithmic Governance and Political Philosophy'. Tanner Lectures on AI and Human Values at the Human-Centered AI Institute, Stanford University, 2023. Recordings.
'The Nature and Justification of Algorithmic Power'. Mala and Solomon Kamm Lecture in Ethics at the Safra Centre for Ethics, Harvard University, 2022. Recording.
Keynotes and Festschrift
8 talks
'Themes from Seth Lazar'. Half-day workshop on my work at University of Hong Kong, 2025.
'Evaluating LLM Ethical Competence'. NeurIPS Workshop on Algorithmic Fairness through the Lens of Metrics and Evaluation, Vancouver, 2024. Recording.
'Philosophical Foundations for Pluralistic Alignment'. NeurIPS Workshop on Pluralistic Alignment, Vancouver 2024
'What, if anything, should we do, now, about catastrophic AI risk?'. European Workshop on Algorithmic Fairness, Mainz, 2024.
'Legitimacy, Authority, and the Political Value of Explanations'. Oxford Studies in Political Philosophy Conference, Arizona, 2022.
'Legitimacy, Authority, and the Political Value of Explanations'. Japan Association for Philosophy of Science Annual Meeting, Tokyo Institute of Technology, 2021.
'On Machine Ethics'. Keynote lecture at Ethics of Data Science Conference, University of Sydney, 2019
Keynote Lecture at Ethics of War in the 21st Century, University of Stockholm (2014).
Departmental Seminars and other Invited Talks
13 talks
2026 'Trustworthy AI,' University of Hong Kong.
2025 'On AI Personhood Without Sentience,' Jülich Speaker Series in the Philosophy of Technology (organised by Charles Rathkopf, Forschungszentrum Jülich / Uni Bonn), June 2025.
'Aligning Language Model Agents', Lingnan University AI Ethics Workshop.
'What, if anything, should we do, now, about catastrophic AI risk?' 2024: Hong Kong University, Sociotechnical AI Safety Workshop in Rio.
'Aligning LLM Agents', 2024: OpenAI, Purdue University, Stanford School of Engineering.
'Governing the Algorithmic City' (previously, 'The Nature and Justification of Algorithmic Power'). 2023: Cal Poly; 2022: Cornell Tech Digital Life Institute, Princeton University Center for Human Values, Carnegie Mellon, Emory, Toronto Schwarz Reisman Institute for Technology.
'Communicative Justice and the Distribution of Attention'. Cornell, Notre Dame, Knight First Amendment Institute Symposium on Algorithmic Amplification, Columbia.
'Legitimacy, Authority, and the Political Value of Explanations'. 2022: Rutgers University; 2020: Diverse Intelligences Summit; 2019: ANU, MIT, Carnegie Mellon.
'On the Site of Predictive Justice'. 2023: University of Oxford Institute for Ethics in AI; 2022: Princeton Workshop in Normative Philosophy, Edmond J. Safra Center for Ethics, Harvard, ANU.
'What's Wrong with Automated Influence'. 2020: Edmond J. Safra Center for Ethics, Harvard.
'Machine Ethics: A Solution in Search of a Problem'. 2020: Carnegie Mellon; 2018: ANU.
'AI Ethics without Principles'. 2020: Brown Bag Lecture Series, World Bank, Washington DC (Cancelled due to COVID); 2019: NITRD Agency
Previous talks: Moral Philosophy Seminar, University of Oxford (2017, 2013); Moral Sciences Club, University of Cambridge (2017); National University of Singapore (2017); University of York (2017); University of Melbourne (2016, 2011); MIT (2015, 2019); UNC-Chapel Hill PPE Group (2015); Yale Moral Philosophy Working Group (2015, 2013); Kadish Center for Morality, Law and Public Affairs, UC Berkeley (2015); Monash University (2015); Arizona State University (2015); University of Southern California (2015, 2013); UNC-Greensboro (2015); Nathanson Centre for Human Rights, York University, Toronto (2014); CSMN Research Seminar, University of Oslo (2014); University of Toronto (2014, 2011); Program in Ethics and Public Affairs Seminar, Princeton (2014); Roskilde University, Copenhagen (2014); University of St Andrews (2013); Victoria University, Wellington (2013); Uehiro Centre for Practical Ethics, University of Oxford (2013); University of Stirling (2013); University of Auckland (2013); University of Otago (2013); University of Manchester (2013); University of Glasgow (2013); University of Christchurch (2013); Macquarie University (2013); University of Adelaide (2013); Stanford University (2013, 2011); University of Essex (2013); University of Warwick (2013, 2010); Rutgers University (2013); University of Leeds (2013); University of Colorado at Boulder Political Science Department (2013); Political Science, LSE (2013); Political Theory Seminar, University of Cambridge (2013); Dartmouth Department of Government (2011); Washington University St Louis (2011); University of Chicago (2011); Nuffield Political Theory Workshop, University of Oxford (2011).
Conferences/Workshops
6 talks
2026 'Trustworthy AI: Norm-Responsiveness and Accountability,' UHK Workshop on Agents and Companions, 2026.
2026 'Discerning What Matters,' IASEAI Paris, 2026; NeurIPS 2025 (poster); EvalEval/UK AISI Workshop at NeurIPS, 2025; Far.ai Alignment Workshop, San Diego, 2025.
2026 'Resource Rational Contractualism Should Guide AI Alignment,' IASEAI Paris, 2026.
2026 Panel Discussion, IASEAI Paris, 2026.
2025 Panel Discussion on existential risk argument mapping, Centre for Human-Compatible AI (CHAI) Annual Workshop, 2025.
ACM FAccT Tutorial 2024; ACM FAccT Tutorial, 2021; Modelling Morality Workshop, Carnegie Mellon University (2020); Ethics and Uncertainty Conference, Center for Moral and Political Philosophy, Hebrew University of Jerusalem (2018); Ethics and Risk Workshop, ANU (2018); Social Philosophy and Policy workshop, London (2018); Oxford Studies in Normative Ethics Workshop, Tucson (2018); New Work in Political Philosophy workshop, Hebrew University of Jerusalem IAS (2016); Workshop on Ethics and Risk, University of Stockholm (2016); ICREA Public Health Workshop, Barcelona Pompeu-Fabra (2016); US Military Academy, West Point (2015); American Academy Conference, New Dilemmas in Ethics, Technology and War I, Stanford University (2015); American Academy Conference, New Dilemmas in Ethics, Technology and War II, USMA, West Point (2015); Oxford Studies in Political Philosophy Workshop, Syracuse (2015); Feasibility and the Ethics of War Workshop, ANU (2014); CRNAP Oxford-ANU-Princeton Workshop, ANU (2014); Ethics and War Conference, UCSD (2013); Workshop at Stockholm Centre for the Ethics of War and Peace (2014); Self-defence workshop, Centre for Human Values, Princeton (2013); 'Ethics and Law in War', ELAC workshop, University of Oxford (2011); 'Why We Fight: The Purposes of Military Force', ELAC workshop, University of Oxford (2010); Workshop on Ethics, Jus Post Bellum, and International Law, CAPPE, ANU (2010); 'Asymmetric Wars, International Relations, and Just War Theory', Belgrade University (2010); Oxford and Princeton Global Norms/Global Justice Research Collaboration, University of Oxford (2009).
10

cat cv/impact.md

Research Impact

2025 Platform Agents research informed policy consultations with industry lobbyists, Senate staffers, and Consumer Reports; Power and AI research extensively engaged with in the UNDP Human Development Report:

https://hdr.undp.org/system/files/documents/global-report-document/hdr2025reporten.pdf

2023 Contributing Author to Rapid Response Information Report on Generative AI by Australian Council of Learned Academies

21-22 Member of 14-person US National Academies of Science, Engineering and Medicine Study Committee on 'Ethics and Governance of Responsible Computing Research', resulting in publication of 150-page co-authored report, led by Professor Barbara Grosz (Harvard)

2021 Contributing Author to Rapid Research Information Forum on Motivators for use of the COVIDSafe App

2020 Public Submission (with other members of HMI and Australian Academy of Science) to Human Rights Commission in response to Human Rights and Technology Report

2020 Participated in Gradient Institute-led workshop on the ethics of insurance pricing with IAG

2019 Invited Speaker at Plan Jericho/Trusted Autonomous Systems/Defence Science and Technology workshop on Ethical AI for Defence

2019 Invited Participant at Defense Innovation Board Roundtable on AI policy principles for the US Department of Defense, subsequently provided comments on draft principles

2019 Invited written submission for Defense Innovation Board consultation on AI policy principles

2019 Invited Participant, Defence Science and Technology Megatrends Workshop

2019 Expert Working Group member, Academy of Social Sciences of Australia report to Office of National Intelligence on US National Academies Decadal Survey

2019 Public Submission (with Bob Williamson and the Australian Academy of Science) to the Data61/DIIS consultation on Australia's Ethical Framework for AI

2018 Invited Public Submission to Defense Innovation Board consultation on AI policy principles for the US Department of Defense

2015 Invited participant in MacArthur Foundation and American Academy of Arts and Sciences project on New Technologies and the Ethics of War

2010 Contributed to Advisory Consultation, US Army Professional Military Ethics code

11

cat cv/supervision.md

HDR Supervision

2011- PhD Panel Chair, School of Philosophy, ANU

Kira Breithaupt (started 2025), Iman Ferestade (started 2025), Tim Dubber (started 2025), Andrew Smart (started 2024), Jake Stone (started 2021), Max Fedoseev (PhD 2024), Josef Holden (PhD 2023), Chad Lee-Stronach (PhD 2019), Adam Gastineau (MPhil 2016). Fedoseev and Lee-Stronach both won VC Teaching Awards for tutoring. Lee-Stronach is now tenure-track at Northeastern after a Stanford postdoc. Stone has been offered a postdoc working with Sandra Wachter.

2011- PhD Panel Member, School of Philosophy, ANU

Emily Leijer, James Lim, Chris Lernpass, Jenny Munt, Shalom Chalson (PhD 2025), Kirsten Mann (PhD 2024), Devon Cass (PhD 2020), Heather Browning (PhD 2020), Ten Herng Lai (PhD 2020), Adam Bugeja (PhD 2018), Rob Kirby (PhD 2017), Lachlan Umbers (PhD 2017), Matt Hammerton (PhD 2016), Chris Gyngell (PhD 2015), Jonathan Pickering (PhD 2014), Jo Lau (PhD 2013), Stephanie Collins (PhD 2012).

2011- PhD Panel Member, Other

Member of dissertation committee for Adam Betz (University of Illinois, Chicago, PhD 2016)*, and Steve Woodside (Rutgers, PhD 2016)*.

12

cat cv/teaching.md

Coursework Teaching

2015- Honours Supervision

Aleks Hammo (2023 H1), Antonio Esposito (2023), Matthew Wiseman (2020 H1), Kida Lin (2019 H1), Eleanor Kay (2018 H1), Oliver Rawle (2017 H1), Kramer Thompson (2017 H1), Julian Christopher Scott (2015 H1).

2019 Co-Convenor of PHIL3073 'The Moral and Political Philosophy of AI'. 70 students.

2019 Foundations Graduate Seminar (PHIL8011) on 'The Philosophy of AI'. 16 students.

2019 Co-Convenor of ANU/Humboldt/Princeton Summer Institute on Normativity. 23 students.

2017 Foundations Graduate Seminar (PHIL8011) on 'Moral Decision Theory'. 16 students.

15-16 Convenor of Philosophy Honours.

13-15 Convenor of PHIL8011 Foundations Seminar.

2013 Foundations Graduate Seminar (PHIL8011) on 'Liberty', with Philip Pettit. 16 students.

11-14 Convened Graduate Work-in-Progress group in MSPT.

2014 Introduced and organised a new Project Design Review for first-year MSPT students.

07-09 Pembroke College, Oxford

Designed syllabus and taught tutorials for courses on Kant's Ethics, War and Global Justice, Marx and Marxism, and Theory of Politics and Ethics. Also taught revision seminars (larger classes). One of my students went on to become an academic philosopher, now at Cardiff.

06-09 Regent's Park, St. Hugh's, St. Peter's, St. Hilda's, Wycliffe Hall, Oxford

Designed syllabus and taught tutorials for War and Global Justice, Theory of Politics, and Ethics.

06-09 Oxford Overseas Study Course, Taylor University Programme, Oxford

13

cat cv/service.md

Service (Profession)
19 roles
2026 Position Paper Track Chair, NeurIPS 2026
2025- Editor, Oxford Studies in Philosophy of AI and Computing
25- Section Editorial Committee member for Computing Research Repository (arXiv)
25- Associate Editor, Philosophy & Public Affairs
22-25 Executive Committee of ACM Fairness, Accountability and Transparency Conference
24-25 Expert recommender and speculation grantor focused on AI Safety for the Survival and Flourishing Fund; reviewer for the AI Safety Fund and UK AI Safety Institute.
21- Area Chair: NeurIPS (2025), ICML (2025), CoLM (2024, 2025), FAccT (2022, 2023, 2024, 2025)
21- Program Committee member: AIES (2022, 2023), IJCAI (2021)
21-23 Devised and directed Philosophy, AI and Society Consortium of associated universities, uniting philosophers at ANU, Oxford, Stanford, Toronto, Princeton and Harvard.
19-23 Presidential Nominee on MIT Corporation Visiting Committee for the Department of Linguistics and Philosophy
2019 Expert Working Group, Academy of Social Sciences of Australia, report to Office of National Intelligence on Social Science Research & Intelligence in Australia
18-25 Editor, Philosophers' Imprint
2016- Editorial Board, Oxford Studies in Political Philosophy
15-19 Area Editor (political philosophy), Ergo
2012- Editorial Board, Journal of Political Philosophy
13-14 Social and Political Philosophy editor for PhilPapers
2010 Contributed 'War' entry to Oxford Bibliographies Online, Oxford University Press
2008- Referee: Ethics, Law and Philosophy, Social Theory and Practice, British Journal of Political Science, Political Studies, International Theory, Journal of Moral Philosophy, Philosophy and Psychology, Philosophical Studies, Review of International Studies, Politics, Philosophy and Economics, Journal of Applied Philosophy, Res Publica, Canadian Journal of Philosophy, Ratio, European Journal of Political Theory, Philosophical Quarterly, Philosophical Review.
2008- Refereeing and commentary: Oxford University Press, Routledge, University of Chicago Press