To develop morally sensitive artificial intelligence, we must work out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for that task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of the leading philosophy of science journal Synthese on 'Norms for Risk'.
On the heels of the Stanford Human-Centred AI Institute's fall conference in 2019, Seth Lazar and Stanford's Rob Reich co-convened a one-day workshop exploring the morality, law, and politics of data and AI.
Together with Stanford's Rob Reich, Seth Lazar co-convened a session of the Human-Centred AI Institute's fall conference on AI Ethics, Policy and Governance, on 'New Directions in AI Ethics'.