Workshop on Sociotechnical AI Safety

The fall Workshop on Sociotechnical AI Safety at Stanford, hosted by Stanford's McCoy Family Center for Ethics in Society, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the MINT Lab at the Australian National University, recently brought together AI safety researchers and researchers focused on fairness, accountability, transparency, and ethics in AI. The event fostered fruitful discussions on inclusion in AI safety and on complicating the field's conceptual landscape, and participants identified promising directions for future research. A summary of the workshop can be found here, and a full report here.

MINT Lab Secures Grant for Sociotechnical AI Safety Research

The Machine Intelligence and Normative Theory (MINT) Lab has been awarded a US$480,000 grant from the Survival and Flourishing DAF (Donor Advised Fund). This gift will support the MINT Lab's research into sociotechnical AI safety: the integration of multidisciplinary perspectives with technical research on mitigating direct risks posed by AI systems operating without immediate human supervision.
