Seth Lazar joined the Templeton World Charity Foundation Diverse Intelligences Summer Institute as faculty, giving talks on the moral and political epistemology of data and AI, and the Value of Explanations.
Christian Barry and Seth Lazar consider what justifies requiring some people to bear costs for the sake of others, in the public health response to COVID-19.
This article, in the US magazine Barron's, explores how to think about the privacy risks of app-based contact-tracing in the age of big data, arguing that even if tech companies choose wisely and justly, the 'laws' of their operating systems cannot be legitimate. Democratic institutions are the only means we've discovered to legitimate the use of power in complex social systems.
Seth Lazar and Colin Klein question the value of basing design decisions for autonomous vehicles on massive online gamified surveys. Sometimes the size of big data can't make up for what it omits.
In this article, co-authored with epidemiologist Meru Sheel, Seth Lazar questions whether tech companies or democratically elected governments should decide how to weigh privacy against public health, when fundamental rights are not at stake.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher's Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other areas has negatively impacted human skill?
The US Defense Innovation Board recently approved a document proposing principles governing the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to an expert panel discussing candidate principles, and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper "Artificial Intelligence: Australia's Ethics Framework". Read our key recommendations here.
In a joint submission, HMI identified seven areas for further development in the Human Rights and Technology discussion paper published by the Australian Human Rights Commission. The main three concerned: defining 'AI-informed decision-making'; the demand for explanations; and the absence of a formal link between design and assessment.
In March 2020 Seth Lazar presented a paper on machine ethics to an interdisciplinary conference at CMU. His respondent was Professor Jonathan Cohen (Princeton).
To develop morally sensitive artificial intelligence, we have to figure out how to incorporate nonconsequentialist reasoning into mathematical decision theory. This paper, part of a broader project on duty under doubt, explores one specific challenge for this task.
Seth Lazar, with Alan Hájek and lead editor Renee Bolinger, co-edited a special issue of the leading philosophy of science journal Synthese on 'Norms for Risk'.