In this article, co-authored with epidemiologist Meru Sheel, Seth Lazar asks whether tech companies or democratically elected governments should decide how to weigh privacy against public health when fundamental rights are not at stake.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher’s Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other domains has eroded human skill?
The US Defense Innovation Board recently approved a document proposing principles to govern the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to join an expert panel discussing candidate principles, and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper “Artificial Intelligence: Australia’s Ethics Framework”. Read our key recommendations here.
In a joint submission, HMI identified seven areas for further development in the Australian Human Rights Commission’s Human Rights and Technology discussion paper. The main three concerned: defining ‘AI-informed decision-making’; the demand for explanations; and the absence of a formal link between design and assessment.
In March 2020 Seth Lazar presented a paper on machine ethics at an interdisciplinary conference at CMU. His respondent was Professor Jonathan Cohen (Princeton).
To develop morally sensitive artificial intelligence, we have to figure out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for that task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of the leading philosophy of science journal Synthese on 'Norms for Risk'.
Seth Lazar was jointly awarded the ANU Vice Chancellor's Award for Excellence in Research. This is one of the university's highest honours for research, recognising Seth's contributions to philosophy and his leadership of the HMI Project.
On the heels of the Stanford Human-Centred AI Institute's fall conference in 2019, Seth Lazar and Stanford's Rob Reich co-convened a one-day workshop exploring the morality, law and politics of data and AI.
Together with Stanford's Rob Reich, Seth Lazar co-convened a session on 'New Directions in AI Ethics' at the Human-Centred AI Institute's fall conference on AI Ethics, Policy and Governance.
Our skills define us as humans, and no skill is more human than the exercise of moral judgment. We are already using Artificial Intelligence (AI) to automate morally loaded decisions. In other domains of human activity, automating a task diminishes our skill at that task. Will 'moral automation' diminish our moral skill? If so, how can we mitigate that risk and adapt AI to enable moral 'upskilling'? Our project, funded by the Templeton World Charity Foundation, will use philosophy, social psychology, and computer science to answer these questions.