01

cat about.md

Powerful AI systems are arriving at a moment of acute vulnerability for liberal democracy. The institutions and practices that have preserved freedom, equality, and collective self-determination for 250 years are in freefall, and are ill-equipped to absorb the radical changes that AI will bring. The MINT Lab uses philosophical and computational research to chart a course through the AI transition, and to help design the institutions that will preserve liberal democratic values through the next quarter-millennium.

Principal Investigator

Seth Lazar is a Professor at the Johns Hopkins University School of Government and Policy and founding director of MINT Lab. He is also Professor of Philosophy at the Australian National University. His research focuses on the philosophy of AI and computing, and on the defence, reinvigoration, and redesign of liberal democratic institutions for the AI transition.

Research Projects

Normative Competence

Normative competence is the ability to recognise and act on the practical reasons that apply to one’s actions. It’s a precondition for any highly capable autonomous system to be trusted in the world. How can we know when an AI system is genuinely normatively competent? What would it mean for there to be normatively competent agents in the world besides humans? MINT Lab approaches these questions through technical AI evaluations that both inform and are informed by first-order philosophical research.

Governing Agents

AI has moved beyond the chat window. Language model agents are entering every area of social, economic, and political life. What norms should apply to them? What norms should apply to those interacting with them? What new infrastructure and institutions are necessary to enable a safe, decentralised AI agent economy? How can we prevent an era of platform agents that radically concentrates power in the digital economy? How can societies navigate the automation of knowledge work without descending into either destructive populism or radical social inequality?

Post-AGI Political Philosophy

Under some definitions, we already have AGI; today’s AI systems can perform a substantial share of the tasks humans can perform with a computer. But the rough edges of these systems are likely to be smoothed out soon, and the ceiling on their performance keeps rising. When we do have genuinely powerful AI systems, how should we live together in political communities? What guidance can political philosophy give us in this constitutional moment?

02

grep -r "people" ./lab/

Team

Seth Lazar
Principal Investigator
Philosophy
Daniel Kilov
Lab Manager, Research Fellow
Philosophy, Cybernetics
Secil Yanik Guyot
Research Engineer
Computer Science
Ned Howells-Whitaker
Research Fellow
Philosophy
Jennifer Munt
Research Fellow
Philosophy
Kira Breithaupt
PhD Student
Philosophy, Cognitive Science
Tim Dubber
PhD Student
Philosophy
Iman Ferestade
PhD Student
Philosophy
Cameron Pattison
PhD Student
Philosophy
Andrew Smart
PhD Student
Philosophy
Jake Stone
PhD Student
Philosophy
Caroline Hendy
Research Assistant
Linguistics, Statistics
Theo Murray
Research Assistant
Philosophy
Charis Yang
Research Assistant
Philosophy
Sichao Li
Visiting Fellow
Computer Science
Elena Ajayi
Visiting Student
Computer Science
Abbas Bagwala
Visiting Student
Philosophy
Noah Birnbaum
Visiting Student
Philosophy
Angelica Chowdhury
Visiting Student
Computer Science
ChunYan (CY)
Visiting Student
Computer Science, Philosophy
Shira Gur Arieh
Visiting Student
Law
Changbai Li
Visiting Student
Computer Science
Lorenzo Manuali
Visiting Student
Philosophy
Lena Wang
Visiting Student
Philosophy

Affiliates

Syed AbuMusab
Philosophy
Michael Barnes
Philosophy
Michael Bennett
Computer Science
Matt Boulos
Law
Tiberio Caetano
Computer Science
Beba Cibralic
Philosophy, Industry
Ned Cooper
STS
Paul de Font-Reaulx
Philosophy
Sean Donahue
Philosophy
Sina Fazelpour
Philosophy
Roberta Fischli
Philosophy
Iason Gabriel
Philosophy, Computer Science
Jacqueline Harding
Philosophy
Bec Johnson
Philosophy
Sayash Kapoor
Computer Science
Noam Kolt
Law
Elsa Kugelberg
Philosophy
Henrik Kugelberg
Philosophy
Anton Leicht
Philosophy
Anna Leshinskaya
Cognitive Science
Sydney Levine
Cognitive Science
Luise Mueller
Philosophy
Ninell Oldenburg
Philosophy
Tiziano Piccardi
Computer Science
Ben Robinson
Philosophy
Nick Schuster
Philosophy
Jen Semler
Philosophy
Aaron Snoswell
Computer Science
Simon Taylor
STS
Luke Thorburn
Computer Science
David Thorstad
Philosophy
Kangyu (KY) Wang
Philosophy
Kimberlee Weatherall
Law
Xinyue Xu
Computer Science
Zhihe (Vincent) Zhang
Philosophy

Alumni
Christine Balasa
Philosophy
Luca Belli
Computer Science, Industry
Claire Benn
Philosophy
Glen Berman
Computer Science
Damian Clifford
Law
William D’Alessandro
Philosophy
Antonio Esposito
Philosophy
Max Fedoseev
Philosophy
J. Dimitri Gallow
Philosophy
Aleks Hammo
Philosophy
Jessie He
Philosophy
Jenny Judge
Philosophy
Nick Laskowski
Philosophy
James Leib
Philosophy
Emily Leijer
Philosophy
Sadhika Malladi
Computer Science
Corey McCabe
Philosophy
Silvia Milano
Philosophy
Harrison Munday
Philosophy
Ehsan Nabavi
Cybernetics
Rhiannon Nielsen
International Relations
Elija Perrier
Computer Science
Giada Pistilli
Philosophy, Industry
Pamela Robinson
Philosophy
Olivia Shen
Policy
Fei Song
Philosophy
Kristina Vaia
Public Policy
Charles Wan
Computer Science
Cedric Whitney
Information Science
Shang Long Yeo
Philosophy
03

ls -la publications/

1.
Resource Rational Contractualism Should Guide AI Alignment 02/2026
Levine, Franklin, Zhi-Xuan, Yanik Guyot, Wong, Kilov, Choi, Tenenbaum, Goodman, Lazar, Gabriel · IASEAI 2026

Proposes grounding AI alignment in contractualist agreements that diverse stakeholders would endorse, combining resource rationality with moral philosophy.

View source →
2.
Discerning What Matters 02/2026
Kilov, Hendy, Yanik Guyot, Snoswell, Lazar · IASEAI 2026

Reviews existing approaches to evaluating LLM moral competence, identifies three significant limitations, and proposes a multi-dimensional assessment framework.

View source →
3.
Using LLMs to Advance Democratic Values 01/2026
Lazar, Manuali · Minds and Machines

Assesses whether LLMs can support democratic deliberation through summarisation, opinion aggregation, and preference prediction, arguing they should strengthen the informal public sphere rather than automate formal decision-making.

View source →
4.
Beyond Verdicts: Evaluating Language Model Moral Competence 01/2026
Snoswell, Kilov, Lazar · AAAI

Proposes a richer framework for evaluating ethical competence in LLMs that goes beyond binary verdict accuracy to assess the quality of moral reasoning.

View source →
5.
Trustworthy Computation in Engineer's Simulations 12/2025
Ferestade · Synthese

Examines the epistemological foundations of trustworthy computation in engineering simulations, analysing when and why we should trust computational results.

View source →
6.
The AI Power Disparity Index 10/2025
Kim, Kuehnert, Singh, Heidari, Lazar · AAAI AIES

Develops a compound measure of how AI actors' power shapes the AI ecosystem, providing an empirical tool for tracking concentration and disparity.

View source →
7.
Infrastructure for AI Agents 10/2025
Chan et al., Perrier, Lazar · TMLR

Examines how AI agents interact in open-ended environments and argues that making agents useful and safe requires infrastructure-level interventions beyond modifying agent behaviour directly.

View source →
8.
The Moral Case for Using Language Model Agents for Recommendation 09/2025
Lazar, Thorburn, Jin, Belli · Inquiry

Makes a moral case for deploying language model agents in recommendation systems, arguing they could better serve user interests than current algorithmic approaches.

View source →
9.
AI Agents and Democratic Resilience 09/2025
Lazar, Cuellar · Knight Institute

Examines how AI agents may affect democratic institutions and proposes strategies for ensuring democratic resilience in the face of increasingly autonomous AI systems.

View source →
10.
Build Agent Advocates, Not Platform Agents 06/2025
Kapoor, Kolt, Lazar · ICML 2025

Argues that if dominant platform companies control AI agents, the resulting systems will deepen surveillance and tighten lock-in. Proposes user-centric agent advocates as an alternative.

View source →
11.
Using LLMs to Enhance Democracy 06/2025
Lazar, Manuali · FAccT 2025

Non-archival conference presentation at ACM FAccT of the research on using LLMs to advance democratic values.

View source →
12.
Moral Agency without Consciousness 06/2025
Semler · Canadian Journal of Philosophy

Argues that moral agency does not require consciousness, with implications for how we evaluate the moral status of AI systems.

View source →
13.
Moral Disagreement and the Limits of AI Value Alignment 06/2025
Kilov, Schuster · AI & Society

Explores how persistent moral disagreement poses a dual challenge for AI value alignment, questioning both epistemic justification and practical feasibility.

14.
AI and Democratic Freedoms 05/2025
Lazar (ed.) · Knight Institute

Edited collection exploring the intersections between artificial intelligence and core democratic freedoms, bringing together diverse perspectives on AI governance.

View source →
15.
Anticipatory AI Ethics 05/2025
Lazar · Knight Institute

Argues for an anticipatory approach to AI ethics that addresses the societal impacts of emerging AI capabilities before they are widely deployed.

View source →
16.
No Right to an Explanation 05/2025
Karlan, Kugelberg · Philosophy and Phenomenological Research

Challenges the widely held view that individuals have a right to explanations for automated decisions, examining the philosophical foundations of explainability requirements.

17.
Governing the Algorithmic City 01/2025
Lazar · Philosophy & Public Affairs

Drawing on Dewey, argues that computing technologies have transformed political association analogously to industrialisation, and develops a framework for governing the resulting algorithmic city.

View source →
18.
Military AI Cyber Agents (MAICAs) Constitute a Global Threat to Critical Infrastructure 01/2025
Dubber, Lazar · NeurIPS 2025, RegML Workshop

Argues that autonomous military AI cyber agents create a credible pathway to catastrophic risk, and proposes political, legal, and technical countermeasures.

View source →
19.
The Rise and Fall of the First AI Agent Millionaire 10/2024
Lazar · Tech Policy Press

A speculative essay exploring the social and economic implications of autonomous AI agents acquiring and deploying financial resources.

View source →
20.
Automatic Authorities: Power and AI 09/2024
Lazar · Collaborative Intelligence (CUP)

Examines how ML and computational technologies create new forms of power over individuals, as corporations and diminished states exercise authority through automated systems.

View source →
21.
Legitimacy, Authority, and Democratic Duties of Explanation 04/2024
Lazar · Oxford Handbook (OUP)

Develops an account of democratic duties of explanation grounded in political legitimacy and authority, with implications for AI-driven decision-making.

View source →
22.
Frontier AI Ethics 02/2024
Lazar · arXiv / Aeon

Bridges near-term AI harms and long-term existential risk, arguing that language model agents demand anticipatory ethical evaluation of societal impacts.

View source →
23.
Position: On the Societal Impact of Open Foundation Models 02/2024
Bommasani et al., Lazar · ICML 2024

Identifies five distinctive properties of open foundation models that shape their societal impact and analyses the resulting benefits, risks, and policy implications.

View source →
24.
Attention, Moral Skill, and Algorithmic Recommendation 01/2024
Lazar, Schuster · Philosophical Studies

Analyses how algorithmic recommendation systems affect the development and exercise of moral skill, focusing on the role of attention in ethical perception.

View source →
25.
Normative Theory and AI 12/2023
Lazar, Benn, Karhu, Robinson (eds.) · Springer collection

Edited collection bringing together normative theorists and AI researchers to address foundational ethical and political questions raised by artificial intelligence.

View source →
26.
Communicative Justice and the Distribution of Attention 10/2023
Lazar · Knight Institute

Argues that algorithmic intermediaries govern the digital public sphere through architectures, amplification, and moderation, distributing attention in ways that raise questions of communicative justice.

View source →
27.
On the Site of Predictive Justice 08/2023
Lazar, Stone · Noûs

Examines where justice demands intervene in predictive systems, analysing the appropriate locus of moral evaluation for algorithmic prediction.

View source →
28.
The Political Philosophy of Data and AI 05/2022
Lazar, Vredenburgh, Haslanger (eds.) · Canadian Journal of Philosophy special issue

Special issue exploring the political philosophy dimensions of data collection, algorithmic systems, and artificial intelligence.

View source →
29.
Automatic Authorities: Power and AI 02/2022
Lazar · Oxford Handbook of AI Governance

Examines how ML and computational technologies create new forms of power over individuals, as corporations and diminished states exercise authority through automated systems.

View source →
30.
What's Wrong with Automated Influence 09/2021
Benn, Lazar · Canadian Journal of Philosophy

Develops a philosophical account of what makes automated influence, such as algorithmic recommendation and targeted advertising, morally objectionable.

View source →
04

ls -la events/

1.
Workshop on Normative Competence 02/2026
Co-organised workshop · IASEAI 2026

Workshop at the International Association for Safe and Ethical AI examining normative competence in AI systems.

View source →
2.
AGI-Ready Institution Design Workshop 02/2026
Co-organised, co-located workshop · IASEAI 2026

Workshop exploring how democratic institutions can be redesigned to remain effective in the face of advanced AI systems.

3.
AI and Democratic Freedoms Public Symposium 10/2025
Public symposium on AI and democratic freedoms · Knight Institute, Columbia

Public symposium at the Knight First Amendment Institute exploring how AI technologies affect core democratic freedoms.

View source →
4.
Sociotechnical AI Safety Retreat 05/2025
Private research retreat · Kioloa Coastal Campus

Private retreat bringing together researchers to advance work on the sociotechnical dimensions of AI safety.

5.
AI and Democratic Freedoms Symposium 11/2024
Private workshop and public symposium · Knight Institute, Columbia

Combined private workshop and public symposium at the Knight Institute on AI and democratic freedoms, with an open call for abstracts.

View source →
6.
MINT-Yale Law School Workshop 09/2024
Normative Philosophy of Computing · Yale Law School

Joint workshop with Yale Law School on normative philosophy of computing, bridging legal and philosophical perspectives.

View source →
7.
Workshop on Sociotechnical AI Safety 06/2024
International workshop · ITS Rio de Janeiro

International workshop at the Institute for Technology and Society in Rio de Janeiro on sociotechnical approaches to AI safety.

8.
Normative Philosophy of Computing Workshop 06/2024
Research workshop · Kioloa Coastal Campus

Research workshop at ANU's Kioloa campus advancing the normative philosophy of computing research programme.

9.
AI and Catastrophic Risk at ANU 03/2024
Workshop on catastrophic AI risk · Australian National University

Workshop examining pathways to catastrophic risk from AI and potential mitigation strategies.

10.
Workshop on Sociotechnical AI Safety 11/2023
Research workshop · Stanford HAI

Research workshop at Stanford's Human-Centered AI Institute on sociotechnical approaches to AI safety.

View source →
11.
Workshop on Democracy and AI 11/2023
Joint workshop with Carnegie Endowment · Carnegie Endowment for International Peace

Joint workshop with the Carnegie Endowment for International Peace exploring the relationship between democratic governance and AI.

View source →
12.
PAIS Doctoral Colloquium 03/2023
Doctoral research colloquium · Oxford

Two-day doctoral colloquium for the Philosophy, AI and Society consortium, hosted at Oxford.

View source →
13.
PAIS Workshop 01/2023
Philosophy, AI and Society workshop · Stanford McCoy Center

Philosophy, AI and Society workshop bringing together researchers across disciplines at Stanford's McCoy Center.

View source →
14.
MINT Lab Launch: Jamie Susskind Public Lecture 04/2022
Lab launch event with public lecture · ANU

Launch event for the MINT Research Lab at ANU, featuring a public lecture by Jamie Susskind.

View source →
15.
PAIS Workshop 03/2022
Philosophy, AI and Society workshop · Harvard

Inaugural Philosophy, AI and Society workshop hosted at Harvard University.

05

tail -f news.log

1.
conference Trustworthy AI — Invited Talk 04/2026
Jennifer Munt — invited talk on trustworthy AI · University of Hong Kong

Invited talk at the University of Hong Kong on norm-responsiveness and accountability in trustworthy AI systems.

2.
conference IASEAI 2026 Presentations 02/2026
Daniel Kilov — conference presentations and panel · IASEAI 2026

Multiple presentations and a panel discussion at the International Association for Safe and Ethical AI conference.

View source →
3.
media On Claude's New Constitution 01/2026
Lorenzo Manuali — analysis of Anthropic's model constitution · Philosophy of Computing (Substack)

Published in Philosophy of Computing, analysing the normative foundations and implications of Anthropic's new model constitution for Claude.

View source →
4.
conference NeurIPS Workshops: Discerning What Matters 12/2025
Poster and EvalEval/UK AISI Workshop presentation · NeurIPS Workshops, San Diego

Poster presentation and workshop talk at NeurIPS on multi-dimensional assessment of moral competence in LLMs.

View source →
5.
conference NeurIPS Workshops: Military AI Cyber Agents 12/2025
Conference presentation · NeurIPS Workshops, San Diego

Presentation at NeurIPS on the catastrophic risks posed by autonomous military AI cyber agents to critical infrastructure.

View source →
6.
media AI Agents Feature (quoted) 12/2025
Quoted in feature on AI agents · MIT Technology Review

Quoted in an MIT Technology Review feature discussing the control and autonomy dynamics of AI agents like Manus and OpenAI's Operator.

View source →
7.
appointment MATS Cohort 9 Scholar — Iman Ferestade 11/2025
ML Alignment & Theory Scholars program · MATS

MINT Lab member Iman Ferestade selected for the competitive ML Alignment & Theory Scholars programme (Cohort 9), mentored by Neel Nanda at Google DeepMind.

View source →
8.
update Philosophy of AI Summer School — Iman Ferestade 07/2025
Summer school participation · Paris

MINT Lab member Iman Ferestade participated in the Philosophy of AI Summer School in Paris.

View source →
9.
update UNDP Human Development Report Features MINT Research 06/2025
Lazar's work on algorithmic power highlighted · UNDP

The 2025 UNDP Human Development Report extensively engages with Lazar's research on power and AI.

View source →
10.
appointment Visiting Faculty Researcher at Google DeepMind 06/2025
Research visit · Google DeepMind

Six-month visiting faculty researcher appointment at Google DeepMind.

11.
keynote Giannelli Annual Lecture: On AI Personhood Without Sentience 04/2025
Inaugural Giannelli Lecture in Philosophy of Science · St. John's University

Inaugural Arthur and Barbara Giannelli Lecture in Philosophy of Science, examining whether AI systems can be persons without being sentient.

View source →
12.
conference The Philosophy of AI: Themes from Seth Lazar 01/2025
Workshop and festschrift on Lazar's work · University of Hong Kong

Workshop dedicated to themes from Lazar's work in the philosophy of AI, hosted at the University of Hong Kong.

View source →
13.
appointment MINT Lab Moves to Johns Hopkins University 01/2025
Lab relocates to School of Government and Policy · Johns Hopkins University

The MINT Research Lab relocates from ANU to the School of Government and Policy at Johns Hopkins University.

14.
keynote NeurIPS Workshop Keynote: Philosophical Foundations for Pluralistic Alignment 12/2024
Invited workshop keynote · NeurIPS Workshops 2024

Invited keynote at the NeurIPS Workshop on Pluralistic Alignment, presenting philosophical foundations for aligning AI with diverse values.

View source →
15.
keynote NeurIPS Workshop Keynote: Evaluating LLM Ethical Competence 12/2024
Workshop keynote presentation · NeurIPS Workshops 2024

Keynote at the NeurIPS Workshop on Algorithmic Fairness, presenting work on evaluating ethical competence in large language models.

View source →
16.
media Seth Lazar on Legitimate Power and the Political Philosophy of AI 12/2024
Podcast interview · Generally Intelligent

Podcast interview on Generally Intelligent discussing legitimate power, moral nuance, and the political philosophy of AI.

View source →
17.
conference LLM Agents: Prospects and Impacts 06/2024
FAccT tutorial on LLM agents · FAccT 2024

Tutorial at the ACM FAccT conference on the prospects and societal impacts of LLM-based agents.

View source →
18.
media Can We Really Trust AI to Channel the Public's Voice? 04/2024
Opinion piece on AI and democratic participation · The Guardian

Published in The Guardian, questioning whether LLMs can reliably represent public opinion in democratic processes.

View source →
19.
keynote Scholl Lecture: Catastrophic AI Risk 03/2024
Named lecture on catastrophic AI risk · Purdue University

Scholl Lecture at Purdue University on catastrophic AI risk and what should be done about it.

View source →
20.
media Normative Philosophy of Computing 03/2024
Podcast interview on normative philosophy of computing · The Gradient

Podcast interview on The Gradient discussing the normative philosophy of computing research programme.

View source →
21.
keynote EWAF Keynote: Catastrophic AI Risk 02/2024
Keynote on catastrophic AI risk · European Workshop on Algorithmic Fairness

Keynote at the European Workshop on Algorithmic Fairness on what, if anything, should be done about catastrophic AI risk.

View source →
22.
media Can Democracy Survive Artificial General Intelligence? 02/2024
Essay on AGI and democratic governance · Tech Policy Press

Published in Tech Policy Press, examining whether democratic institutions can withstand the challenges posed by artificial general intelligence.

View source →
23.
appointment Seth Lazar Joins Knight Institute as Senior AI Advisor 01/2024
Senior AI Advisor appointment, 2024-25 · Knight First Amendment Institute

Appointed Senior AI Advisor at the Knight First Amendment Institute at Columbia University for 2024-25.

View source →
24.
appointment Non-resident Scholar at Carnegie Endowment 01/2024
Ongoing appointment as non-resident scholar · Carnegie Endowment for International Peace

Two-year appointment as Non-resident Scholar at the Carnegie Endowment for International Peace.

View source →
25.
appointment Visiting Professor at University of Hong Kong 01/2024
Visiting professorship with festschrift · University of Hong Kong

Three-month visiting professorship at the University of Hong Kong, including a festschrift workshop on Lazar's work.

26.
keynote Australian AI Safety Forum 2024
Keynote presentation · Australian AI Safety Forum

Keynote presentation at the Australian AI Safety Forum on catastrophic AI risk.

27.
media The US is Racing Ahead in AI 11/2023
Opinion piece on US AI policy · The Guardian

Published in The Guardian, analysing the divergence between US and EU approaches to AI regulation and governance.

View source →
28.
media Political Philosophy in the Age of AI 09/2023
Podcast interview on AI political philosophy · Philosophy Bites

Interview on the Philosophy Bites podcast discussing how AI transforms the landscape of political philosophy.

View source →
29.
media AI: Is it Out of Control? 08/2023
Podcast episode on AI risks · Science Vs (Spotify)

Featured on Spotify's Science Vs podcast, discussing whether AI development is outpacing governance and safety measures.

View source →
30.
media AI Safety on Whose Terms? 07/2023
Co-authored perspective with Alondra Nelson · Science (AAAS)

Published in Science, co-authored with Alondra Nelson, examining whose interests and values are centred in AI safety discourse.

View source →
31.
media Machines and Morality 06/2023
Feature on AI and moral reasoning · New York Times

Featured in a New York Times series on ChatGPT and morality, exploring whether AI systems can engage in moral reasoning.

View source →
32.
update ACOLA Expert Contributor: Generative AI Report 03/2023
Contributed to Australian Chief Scientist's AI report · Australian Chief Scientist's Office

Contributed as expert to the Australian Chief Scientist's report on generative AI, language models, and multimodal foundation models.

View source →
33.
media Model Alignment Protects Against Accidental Harms 02/2023
Co-authored with Narayanan on alignment limitations · AI Snake Oil

Published on AI Snake Oil, co-authored with Arvind Narayanan, arguing that model alignment addresses accidental harms but not intentional misuse.

View source →
34.
keynote Stanford Tanner Lectures: Algorithmic Governance and Political Philosophy 01/2023
Tanner Lectures on AI and Human Values · Stanford University

Tanner Lectures on AI and Human Values at Stanford, presenting two lectures on governing the algorithmic city and communicative justice.

View source →
35.
media Is Avoiding Extinction from AI Really an Urgent Priority? 01/2023
Essay on AI existential risk priorities · AI Snake Oil

Published on AI Snake Oil, critically examining the case for prioritising AI existential risk over nearer-term harms.

View source →
36.
update National Academies: Responsible Computing Research 10/2022
Ethics and governance of computing report · National Academies of Sciences

Contributed to the National Academies report on the ethics and governance of computing research and its applications.

View source →
37.
keynote Kamm Lecture: The Nature and Justification of Algorithmic Power 07/2022
Mala and Solomon Kamm Lecture in Ethics · Harvard University

Mala and Solomon Kamm Lecture in Ethics at Harvard, examining the nature and justification of power exercised through algorithmic systems.

View source →

We acknowledge the Traditional Custodians of Country throughout Australia, and their continuing connection to culture, community, land, sea and sky. We pay our respects to Elders past, present and future.

© MINT Research Lab · Johns Hopkins University

exit 0
