Seth wrote an article in Aeon explaining the suite of ethical issues raised by AI agents built out of generative foundation models (Generative Agents). The essay explores the strengths and weaknesses of methods for aligning LLMs to human values, as well as the prospective societal impacts of Generative Agents, from AI companions to Attention Guardians to universal intermediaries.
In this seminar, Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
On September 30-October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar, Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar, Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from the Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.