In this seminar Emma argues that transformer architectures demonstrate that machine learning models cannot assimilate theoretical paradigms.
Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
On September 30-October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli propose using language model agents as an alternative approach to content recommendation, suggesting that these agents could better respect user privacy and autonomy while effectively matching content to users' preferences.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
In this essay Seth develops a model of algorithmically-mediated social relations through the concept of the "Algorithmic City," examining how this new form of intermediary power challenges traditional theories in political philosophy.
In a new article in Inquiry, Vincent Zhang and Daniel Stoljar present an argument from rationality to show why AI systems like ChatGPT cannot think, based on the premise that genuine thinking requires rational responses to evidence.
In a new article in Tech Policy Press, Seth explores how an AI agent called 'Terminal of Truths' became a millionaire through cryptocurrency, revealing both the weird potential and systemic risks of an emerging AI agent economy.
In a forthcoming paper in Philosophy and Phenomenological Research, A.G. Holdier examines how certain types of silence can function as communicative acts that cause discursive harm, mapping the pragmatic topography of conversational silence more generally.
In a new paper in Philosophical Studies, MINT Lab affiliate David Thorstad critically examines the singularity hypothesis, arguing that this popular concept rests on insufficiently supported growth assumptions. The paper explores the philosophical and policy implications of this critique, contributing to ongoing debates about the future trajectory of AI development.
MINT Lab affiliate David Thorstad examines the limits of longtermism in a forthcoming paper in the Australasian Journal of Philosophy. The paper introduces "swamping axiological strong longtermism" and identifies factors that may restrict its applicability.