New draft ready on legitimacy, authority, and the political value of explanations, due to be my keynote for the Oxford Studies in Political Philosophy workshop in Tucson, October 2022.
We argue that, beyond the more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for criticising these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping to resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrong.
Seth Lazar's chapter on 'Power and AI: Nature and Justification' is forthcoming in the Oxford Handbook of AI Governance.
Seth Lazar and Christian Barry's co-authored paper, 'Supererogation and Optimisation', has been accepted for publication by the Australasian Journal of Philosophy. The paper explores principles that might underpin either the demand for altruistic efficiency or the denial of any such demand.
Claire Benn and Seth Lazar ask what is wrong with online behavioural advertising and recommender systems in this paper, published in the Canadian Journal of Philosophy.
Pamela Robinson presented 'Moral Disagreement and Artificial Intelligence' at AIES'21. Click through for more information.
Seth co-chaired the 4th AAAI/ACM Conference on AI, Ethics, and Society, a hybrid conference that took place on 19-21 May 2021.
This paper is a collaboration between HMI, IAG and Gradient. It reflects our broader concern that new methods that use machine learning to predict risk in order to determine insurance premiums won't be able to distinguish between risks whose costs people should bear themselves and risks whose costs should be redistributed across the broader population, and that they might also rely on data points that it is intrinsically wrong to use for this purpose.
To develop morally sensitive artificial intelligence, we have to figure out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for this task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of the leading philosophy of science journal Synthese on 'Norms for Risk'.