MINT Seminar Nov 14: Jen Semler on why we shouldn't deploy autonomous weapons systems
Abstract: We should not deploy autonomous weapons systems. We should not try to program ethics into self-driving cars. We should not replace judges with algorithms. Claims of this sort, and arguments against the use of AI systems in particular decision contexts, often point to the same reason: AI systems should not be deployed in such situations because they are not moral agents. But it’s not always clear why a lack of moral agency is relevant to questions about using AI systems in these circumstances. In this paper, I argue that even if AI systems are accurate and reliable in making moral decisions, we do something wrong when we delegate these decisions to AI, and this wrong-making feature can explain why issues of responsibility are difficult to resolve. Specifically, I argue for the following view: delegating certain decisions to AI systems is wrong because it turns events that should be moral actions into, at best, moral behaviors. That is, when we outsource decisions to entities that are not moral agents, we change the status of those decisions in a morally significant way. It’s wrong to replace events that should be moral actions with mere moral behaviors because we have both instrumental and intrinsic reasons to preserve the domain of moral actions, from the perspectives of both the decision-maker and those affected by the decision.
Bio: Jen Semler is in the final months of her DPhil in philosophy at the University of Oxford. She works on artificial moral agency and the use of AI in moral decision-making.