Seth Lazar gave a tutorial on power in political philosophy to attendees of the ACM FAccT conference on Thursday the 4th of March 2021.
Seth Lazar joined the Templeton World Charity Foundation Diverse Intelligences Summer Institute as faculty, giving talks on the moral and political epistemology of data and AI, and the Value of Explanations.
Christian Barry and Seth Lazar consider what justifies requiring some people to bear costs for the sake of others, in the public health response to COVID-19.
This article, in US magazine Barron's, explores how to think about the privacy risks of app-based contact-tracing in the age of big data, arguing that even if tech companies choose wisely and justly, the 'laws' of their operating systems cannot be legitimate. Democratic institutions are the only means we've discovered to legitimate the use of power in complex social systems.
Seth Lazar and Colin Klein question the value of basing design decisions for autonomous vehicles on massive online gamified surveys. Sometimes the size of big data can't make up for what it omits.
Claire Benn and Seth Lazar recorded an interview with Rashna Farrukh for the Philosopher's Zone podcast on Radio National. The theme: moral skill and artificial intelligence. Does the automation of moral labour threaten to diminish our capacity for moral judgment, much as automation in other areas has negatively impacted human skill?
The US Defense Innovation Board recently approved a document proposing principles governing the deployment of AI within the Department of Defense. HMI project leader Seth Lazar was invited to an expert panel discussing candidate principles, and made a submission to the Board.
Together with the Australian Academy of Science, HMI team members wrote a submission responding to the Data61 discussion paper: "Artificial Intelligence: Australia's Ethics Framework". Read our key recommendations here.
In a joint submission, HMI identified seven areas for further development in the Human Rights and Technology discussion paper published by the Australian Human Rights Commission. The three main areas concerned: defining 'AI-informed decision-making'; the demand for explanations; and the absence of a formal link between design and assessment.
Together with Stanford's Rob Reich, Seth Lazar co-convened a session on 'New Directions in AI Ethics' at the Human-Centred AI Institute's fall conference on AI Ethics, Policy and Governance.