Recent work on the moral and political philosophy of AI

Recent work by Minties on Philosophy and AI

If you're interested in why explanations matter, check out https://arxiv.org/abs/2208.08628, where I argue that when computational systems are used to exercise power (especially governing power), explanations are necessary for publicity, and therefore for procedural legitimacy and proper authority.

If you want to know more about how I'm thinking about power and AI, here's a deeper dive just on that topic: https://t.co/vMIx7k8MGs

With Claire Benn, in this paper on 'What's Wrong With Automated Influence', we explore objections grounded in privacy, exploitation, and manipulation, arguing that in each case structural versions of the objection fare better than individualistic ones: https://t.co/4IpDZLYzdY

Here's a draft with Jake Stone on predictive justice, defending the thesis that the differential epistemic performance of predictive models is itself wrong, independently of its downstream causal effects (even if the latter ultimately matter more): https://mintresearch.org/pj.pdf

Still under wraps, but I can share it if you write to me: a paper with (and led by) Nick Schuster on Attention, Moral Skill, and Recommender Systems.

The same goes for drafts of my upcoming Tanner Lectures on AI and Human Values at @StanfordHAI, which focus on algorithmic governance and political philosophy: one on 'Governing the Algorithmic City', the other on 'Communicative Justice and the Political Philosophy of Attention' (ready in a couple of weeks).

Seth Lazar