On the Marginal Risk of Open Foundation Models

One of the biggest tech policy debates today concerns the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts, including the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI.

Open foundation models, defined here as models with widely available weights, enable greater customization and deeper inspection. However, once the weights are released, developers can no longer monitor or moderate downstream use. As a result, risks relating to biosecurity, cybersecurity, disinformation, and non-consensual deepfakes have prompted pushback against openly releasing model weights.

We analyze the benefits and risks of open foundation models. In particular, we present a framework for assessing their marginal risk relative to closed models or pre-existing technology. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements in past studies by making their differing assumptions about risk explicit, and can help foster more constructive debate going forward.


Led by Rishi Bommasani and Sayash Kapoor, this paper brings together authors from across AI research and policy. You can read the paper here: https://crfm.stanford.edu/open-fms/