Socio-Structural Explanations in ML

A new paper by Andrew Smart and Atoosa Kasirzadeh in AI & Society titled "Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning" explores the importance of social context in explaining machine learning outputs.

Papers, SAIS · J Stone
Advocating for Sociotechnical AI Safety with Alondra Nelson in Science

With Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, Seth argued against a narrowly technical approach to AI safety, calling instead for more work on sociotechnical AI safety, which situates the risks posed by AI as a technical system within the broader sociotechnical systems of which it is part.

SAIS, Media, Papers, Policy · Seth Lazar
Workshop on Sociotechnical AI Safety

The fall Workshop on Sociotechnical AI Safety at Stanford (hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University) recently brought together AI safety researchers and those focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the field's conceptual landscape, and participants identified promising directions for future research. A summary of the workshop can be found here, and a full report here.

Policy, Events, SAIS · J Stone
MINT Lab Secures Grant for Sociotechnical AI Safety Research

The Machine Intelligence and Normative Theory (MINT) Lab has been awarded a US$480,000 grant from the Survival and Flourishing DAF (Donor Advised Fund). This gift will support research by the MINT lab into sociotechnical AI safety—the integration of multidisciplinary perspectives with technical research on mitigating direct risks caused by AI systems operating without immediate human supervision.

Grants, SAIS · J Stone