In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
In a forthcoming paper in Philosophy and Phenomenological Research, A.G. Holdier examines how certain types of silence can function as communicative acts that cause discursive harm, offering insights into the pragmatic topography of conversational silence in general.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis, arguing that this popular concept relies on insufficiently supported growth assumptions. The paper explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
MINT Lab affiliate David Thorstad examines the limits of longtermism in a forthcoming paper in the Australasian Journal of Philosophy. The study introduces "swamping axiological strong longtermism" and identifies factors that may restrict its applicability.
The Machine Intelligence and Normative Theory (MINT) Lab at the Australian National University has secured a USD 1 million grant from Templeton World Charity Foundation. This funding will support crucial research on Language Model Agents (LMAs) and their societal impact.
Prof. Lazar will lead efforts to address AI's impact on democracy during a 2024-2025 tenure as a senior AI Advisor at the Knight Institute.
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address normative questions raised by computing and computational systems.
A new paper by Andrew Smart and Atoosa Kasirzadeh in AI & Society, titled "Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning," explores the importance of social context in explaining machine learning outputs.