In a new paper, Alan Chan, Kevin Wei, Sihao Huang, Nitarshan Rajkumar, Elija Perrier, Seth Lazar, Gillian K. Hadfield, and Markus Anderljung investigate the infrastructure we need to bring about the benefits and manage the risks of AI agents.
In a forthcoming paper in the ACM Journal on Responsible Computing, Jake Stone and Brent Mittelstadt consider how we ought to legitimate automated decision-making.
In a forthcoming paper in AI & Society, Sean Donahue argues that while common objections to epistocracy may not apply to AI governance, epistocracy remains fundamentally flawed.
On December 5, Seth Lazar presented at the Lingnan University Ethics of Artificial Intelligence Conference on rethinking how we evaluate LLM ethical competence. His talk critiqued current approaches that focus on binary ethical judgments, arguing instead for evaluations that assess LLMs' capacity for substantive moral reasoning and justification.
A paper by MINT Lab affiliates Ned Cooper and Glen Berman, along with co-authors Wesley Hanwen Deng and Ben Hutchinson, was accepted for poster presentation at the NeurIPS Workshop on Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI.
In November, MINT Lab Research Fellow Sean Donahue presented his research on platform legitimacy and digital governance at universities in Sydney and Hong Kong and at Carnegie Mellon University.
Seth Lazar wrote an essay in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
This week at the MINT Lab Seminar, Jake Stone presented his research arguing that corporate involvement in open-source AI isn't simply exploitative, but can create mutually beneficial partnerships when properly governed.
This week, Harriet Farlow and Tania Sadhani presented their framework for analyzing the likelihood of AI incidents. Developed through a collaboration between Mileva Security Labs, the ANU MINT Lab, and UNSW, with funding from Foresight, their work aims to bridge short- and long-term AI risks through practical quantification methods.
This week, Robert Long spoke to the lab about his latest paper with MINT Lab affiliate Jacqueline Harding, arguing that near-term AI systems may develop consciousness, requiring immediate attention to AI welfare considerations.
In this seminar, Emma argued that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.