The fall Workshop on Sociotechnical AI Safety at Stanford, hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University, recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the field's conceptual landscape, and participants identified promising directions for future research. A summary of the workshop can be found here, and a full report here.
In this piece for Tech Policy Press, Anton Leicht argues that future AI progress might not proceed linearly and that we should prepare for potential plateaus and sudden leaps in capability. Leicht cautions against complacency during slowdowns and advocates focusing on building the capacities needed to navigate uncertainty in future AI development.
The Machine Intelligence and Normative Theory (MINT) Lab has been awarded a US$480,000 grant from the Survival and Flourishing DAF (Donor Advised Fund). The gift will support the MINT Lab's research into sociotechnical AI safety: the integration of multidisciplinary perspectives with technical research on mitigating direct risks caused by AI systems operating without immediate human supervision.
Andrew Smart and colleagues presented a tutorial session at FAccT 2024 that aimed to broaden the discourse around AI safety beyond alignment and existential risks, incorporating perspectives from systems safety engineering and sociotechnical labour studies while emphasising participatory approaches.
As AI continues to permeate every aspect of our society, the evolution of adversarial machine learning from a niche academic field to a critical component of global cyber security underscores the significance of research in this area.
Seth presented a tutorial on the rise of Language Model Agents at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
The UK government is considering using Large Language Models to summarise and analyse submissions during public consultations. Seth weighs in for the Guardian on the considerations behind such a proposal.
Seth was invited on The Gradient podcast to discuss the risks, challenges, and benefits of developing publicly-minded AI, as well as the philosophical challenges those questions pose.
Seth was invited on the Generally Intelligent podcast to discuss issues of power, legitimacy, and the political philosophy of AI.
Seth has completed a book chapter forthcoming with MIT Press. The book is Collaborative Intelligence: How Humans and AI Are Transforming Our World, edited by Arathi Sethumadhavan and Mira Lane.
Seth was invited to deliver the Scholl Lecture at Purdue University on 3 April. His presentation focused on how we should respond, at present, to the catastrophic risks posed by AI systems, risks that often dominate contemporary discourse in the normative philosophy of computing.
Michael Barnes presented at the Second Annual Penn-Georgetown Digital Ethics Workshop. The presentation, co-authored with Megan Hyska (Northwestern University), was titled “Interrogating Collective Authenticity as a Norm for Online Speech,” and it offers a critique of relatively new forms of content moderation on major social media platforms.