Seth joined Arvind Narayanan and Sayash Kapoor in investigating the limitations of techniques for model alignment. Read more here: https://www.aisnakeoil.com/p/model-alignment-protects-against
Seth pens an essay for the Knight First Amendment Institute on the growing need for communicative justice.
Michael Barnes has published a new article in the Journal of Ethics and Social Philosophy. Michael’s paper is entitled ‘Who Do You Speak For? And How? Online Abuse as Collective Subordinating Speech Acts’.
Michael Barnes has published a new article in Sbisà on Speech as Action, part of the book series Philosophers in Depth (PID). Michael’s paper is entitled ‘Presupposition and Propaganda: A Socially Extended Analysis’.
Seth shares some lessons from a conversation with a rogue AI about what imbues humans with moral worth.
Seth teams up with AI scientists to reply to the statement on the existential risk posed by AI to humanity, arguing that we can best avoid existential risk from AI by building robust research communities that work to mitigate better-understood risks from concrete AI systems.
Michael Barnes has published a new article in Feminist Philosophy Quarterly, as part of a special issue on Feminism, Social Justice, and Artificial Intelligence. Michael’s paper is entitled ‘Online Extremism, AI, and (Human) Content Moderation’.
A new draft is ready on legitimacy, authority, and the political value of explanations, due to be Seth's keynote for the Oxford Studies in Political Philosophy workshop, Tucson, October 2022.
We argue that, beyond the more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for criticism of the predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wrongs.
Seth Lazar's chapter on 'Power and AI: Nature and Justification' is forthcoming in the Oxford Handbook of AI Governance.
Seth Lazar and Christian Barry's co-authored paper, 'Supererogation and Optimisation', has been accepted for publication by the Australasian Journal of Philosophy. The paper explores principles that might underpin either the demand for altruistic efficiency, or the denial of any such demand.
Claire Benn and Seth Lazar ask what is wrong with online behavioural advertising and recommender systems, in this paper published in the Canadian Journal of Philosophy.