News / April 2024

Response to NTIA Request for Comments on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights

On March 27, 2024, a group of UC Berkeley-affiliated researchers with expertise in AI development, safety, security, policy, and ethics submitted this formal response to the National Telecommunications and Information Administration (NTIA) Openness in AI Request for Comment (RFC). The RFC stems from the Biden Administration’s Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which directed the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information and in consultation with the Secretary of State, to conduct a public consultation process and issue a report on the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to dual-use foundation models for which the model weights are widely available.

This submission follows a previous response to NTIA last year on AI Accountability as well as several responses to the National Institute of Standards and Technology (NIST) over the past two and a half years at various stages of NIST’s development of the AI Risk Management Framework (AI RMF) and follow-on work such as NIST’s Generative AI Public Working Group.


The debate about “open or closed” foundation models has become contentious in policy and technical communities, but there are middle-ground approaches that can help balance the benefits of openness with the risks from the proliferation of unsecured dual-use foundation models. We emphasize these approaches in our response. A recent survey of more than 1,000 Americans found similarly nuanced views, including interest in improving independent researcher access while recognizing the risks of open-sourcing powerful AI models.[1]

NTIA has used the term “open foundation models” as a shorthand for the more specific term used in the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, “Dual-Use Foundation Models with Widely Available Model Weights.” However, we note that the term “open” is often conflated with “open-source” and can overstate the true transparency of a release strategy while creating a false binary; in practice, there is a spectrum of release methodologies between “open” and “closed.”[2] Throughout our response, we typically use the terms “unsecured” and “secured” instead of “open” and “closed,” respectively, to refer specifically to whether model weights have been made widely available (e.g., downloadable from a public repository).[3], [4]

Here are some of our key comments and recommendations on the NTIA Openness in Artificial Intelligence Models RFC:

  • AI models with widely available weights, or unsecured models, can provide important benefits such as enhanced privacy for intended model users, easier auditability, and a more widely accessible research and innovation ecosystem.

  • However, unsecured models also pose risks, such as various forms of malicious misuse resulting in harm, including to people’s rights and wellbeing and to the safety of the general public. Although both closed and open models can pose some such risks, unsecured models pose unique risks in that safety and ethical safeguards implemented by developers can be removed relatively easily from models with widely available weights (e.g., via fine-tuning).[5] Knowing this, the developers of some open models do not attempt to implement safeguards in the first place. It is also impossible to ensure that critical security updates or other updates are effectively propagated to all instances of an open model.
    • Available evidence has demonstrated harmful impacts from misuse of unsecured models, in particular the creation of child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII), disinformation at scale, facilitation of cyberattacks, online radicalization, and the promotion of harmful stereotypes and violence.[6] These risks are likely to continue to disproportionately harm women, minority groups, and vulnerable communities, including children and the elderly.[7]
    • Some have argued that the marginal risk of open foundation models is relatively minimal in at least some cases.[8] The notion of “marginal risk” can be a helpful framing to ground the discussion of risks in a broader context. However, some of the most well-documented risks, such as the promotion of harmful stereotypes and violence, were not included in that particular study. We also expect the marginal risks of unsecured model release to grow, especially for the largest-scale and most broadly capable models – including dual-use foundation models as defined in Executive Order 14110 – that could eventually pose the greatest risks of enabling severe harms via malicious misuse. The capabilities of these “frontier” models have tended to increase with model size and with the quantities of data and compute used in training, and we expect continued increases in each of those dimensions. Thus, we expect increases in frontier-model capabilities (e.g., writing software code) that would also increase malicious-misuse hazards (e.g., writing malware).

  • In addition to investing in upstream protections to prevent a range of misuses of unsecured foundation models, we should invest in downstream protections to prevent specific misuses. However, we should not rely only on downstream protections.
    • For example, many advocate requiring U.S. mail-order gene synthesis labs to screen orders as a downstream protection against the creation of bioweapon agents following malicious misuse of an unsecured model. However, it has long been recognized that mail-order gene synthesis labs in China and elsewhere outside the United States are much less likely than U.S. labs to follow standards for screening gene synthesis orders, which limits the value of mandatory screening requirements in the United States.[9]
    • Model developers are often the sole decision-makers for choices about data, model design, evaluation, and mitigations. Developers of foundation models are also often highly resourced companies or organizations. Requirements for reasonable upstream protections provide accountability for the organizations in AI value chains that have the greatest power to reduce risks to the public from AI systems.

  • As part of managing the risks of unsecured model release without forgoing its benefits, we recommend that foundation model developers planning to provide downloadable, fully open, or open-source access to their models first use a staged-release approach (e.g., not releasing parameter weights until after an initial secured or structured-access release during which no substantial risks or harms have emerged over a sufficient time period), and not proceed to the final step of releasing model parameter weights until a sufficient level of confidence in risk management has been established, including for safety risks and risks of misuse and abuse.[10]
    • The largest-scale or most capable models (including dual-use foundation models as defined in Executive Order 14110) should receive the greatest duration and depth of pre-release evaluations, as they are the most likely to have dangerous capabilities or vulnerabilities that can take time to discover.[11]
    • Structured access, such as through APIs, typically provides more opportunities for mitigations than downloadable model weights allow, including content filters to prevent misuse, monitoring of usage to identify misuse, and system shutdown or rollbacks if problematic usage is identified.
    • Foundation model developers that publicly release the parameter weights of their models through downloadable, fully open, or open-source access, and developers that suffer a leak of model weights, will in effect be unable to shut down or decommission AI systems that others build using those weights. Once model weights have been downloaded, the downloading actor can in turn make them available to others, including via distribution channels that can be hard to monitor and harder or impossible to shut down. (See our response to RFC question 8a for additional details on these recommendations.)

  • We also recommend openness mechanisms that do not require making a model’s parameter weights downloadable. Many of the benefits of open-source systems, such as review and evaluation by a broader set of stakeholders, can be supported through transparency, engagement, and other openness mechanisms besides releasing model parameter weights. For example, independent researchers should be able to test and audit secured models more readily and with the protections of a legal safe harbor.[12] Other benefits of open source, such as making AI technologies more widely available, can also be achieved by providing less-resourced actors with cost-free or cost-subsidized access to secured foundation models. In addition to removing monetary barriers, freely accessible secured models can often be more inclusive of less technologically sophisticated users. The availability of open-source models does not remove the many structural barriers, such as access to computing power and data, that many people, communities, and organizations face in participating in AI development.[13] (See our response to RFC question 8a for additional details on these recommendations.)

Download the full RFC submission for more details and additional comments from the researchers.
