Seth Lazar has been invited to attend a convening of the Network of AI Safety Institutes hosted by the US AISI, to take place in San Francisco on November 20-21.
On September 30-October 1, MINT co-organised a workshop convened by Imbue, a leading AI startup based in San Francisco, focused on assessing the prospective impacts of language model agents on society through the lens of classical liberalism.
In this seminar Jen Semler presents her work examining why delegating moral decisions to AI systems is problematic, even when these systems can make reliable judgements.
Professor Seth Lazar will be a keynote speaker at the inaugural Australian AI Safety Forum 2024, joining other leading experts to discuss critical challenges in ensuring the safe development of artificial intelligence.
In this seminar Tim Dubber presents his work on fully autonomous AI combatants and outlines five key research priorities for reducing catastrophic harms from their development.
The Knight First Amendment Institute invites submissions for its spring 2025 symposium, “Artificial Intelligence and Democratic Freedoms.”
This workshop aims to bring together the best philosophical work on normative questions raised by computing, and to identify and connect early-career scholars working on these questions. It will feature papers that use the tools of analytical philosophy to frame and address the normative questions raised by computing and computational systems.
The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University) recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the conceptual landscape, and participants identified promising directions for future research in the field. A summary of the workshop can be found here, and a full report here.
Andrew Smart and colleagues presented a tutorial session at FAccT 2024 that aimed to broaden the discourse around AI safety beyond alignment and existential risks, incorporating perspectives from systems safety engineering and sociotechnical labour studies while emphasising participatory approaches.
Michael Barnes presented at the Second Annual Penn-Georgetown Digital Ethics Workshop. The presentation (co-authored with Megan Hyska, Northwestern University) was titled “Interrogating Collective Authenticity as a Norm for Online Speech,” and it offers a critique of (relatively) new forms of content moderation on major social media platforms.
On 23 March 2024, Nick Schuster presented his paper “Role-Taking Skill and Online Marginalization” (co-authored by Jenny Davis) at the American Philosophical Association's 2024 Pacific Division Meeting in Portland, Oregon.
How should we respond to those who aim at building a technology that they acknowledge could be catastrophic? How seriously should we take the societal-scale risks of advanced AI? And, when resources and attention are limited, how should we weigh acting to reduce those risks against targeting more robustly predictable risks from AI systems?