Event Recap / December 2023

Recap: Launch Event for AI Risk Management Standards Profile v1.0

On November 8, CLTC’s Artificial Intelligence Security Initiative (AISI) and the CITRIS Policy Lab co-hosted an online panel to mark the release of Version 1.0 of the UC Berkeley AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models, a resource developed by a team of UC Berkeley researchers to help developers manage risks related to large language models (LLMs) and other “general-purpose AI systems” (GPAI or GPAIS). Such models have the potential to provide a wide range of benefits, but they can also result in adverse events with profound consequences. The Profile is designed to help organizations apply AI risk management standards such as the NIST AI Risk Management Framework and ISO/IEC 23894.

“There are so many important initiatives happening [related to AI policy] in the US and abroad,” said Jessica Newman, Director of the AI Security Initiative and Co-Director of the AI Policy Hub (and a co-author of the Profile). “We’re excited to help connect the dots today between this Profile and the broader context and environment of policy and governance efforts for cutting-edge AI systems.”

Anthony (Tony) Barrett, the lead author of the report, explained that the Profile is structured around the NIST AI Risk Management Framework’s four core functions: “govern,” related to roles and responsibilities for risk management processes; “map,” for identifying risks and context; “measure,” for measuring risk and trustworthiness metrics; and “manage,” which relates to risk management decisions and risk mitigation controls.

“Profiles such as ours can provide supplemental guidance for specific AI technologies or use cases,” Barrett said. “When we talk about risk in the Profile, and when we talk about impacts, we’re talking about at least three dimensions of risks or potential impacts, and they are addressed in at least three different areas in the Profile.”

Barrett provided a summary of some of the key recommendations detailed in the Profile, including that developers should use a structured approach to releasing a model so they can monitor how people are using it and make changes if necessary. “If unexpected harms or misuses or other emergent properties of the system begin to show up, then you can make corresponding changes once you’ve released the model,” Barrett said.

Krystal Jackson, a non-resident research fellow with the AISI who helped develop the Profile, explained that the researchers conducted a “feasibility test,” applying the guidance in the Profile to assess multiple large-scale foundation models. “We did this by leveraging publicly available information such as technical reports, model cards, blog posts, and documented assessments,” Jackson said. “This allowed us to test the feasibility of our recommended guidance against the risk management actions that were already being performed by developers to ensure that our guidance was both applicable and reasonable…. And it provided some illustrative examples of how one would apply our guidance to a real-world model…. Overall, this testing really demonstrates that our profile guidance was largely applicable and feasible for model developers to apply for their future risk assessments.”

Barrett invited the audience to provide feedback to inform future improvements to the Profile. “We intend this as a contribution to standards in this area,” Barrett said. “Widespread norms and standards for using practices such as in this Profile can help ensure that developers of foundation models and similar systems can be competitive without compromising on practices for AI safety, security, accountability, and related issues. We’re planning on at least one update over the next year…. We’d love to get your feedback, and we’ll consider that as we work on refinements for future versions.”

A Conversation on AI Safety for GPAIS


The panel next turned to a discussion led by Brandie Nonnecke, Director of the CITRIS Policy Lab and Co-Director of the AI Policy Hub, who co-authored the Profile. Nonnecke asked the panelists to explain why risk management resources are necessary for general-purpose AI systems and foundation models.

Apostol Vassilev, an AI and cybersecurity researcher with the National Institute of Standards and Technology (NIST), explained that foundation models are poised “to impact many aspects of business and life of people and society,” but that “many of the models’ capabilities and vulnerabilities are not fully understood and characterized. This creates potential risks for misuse, abuse, and adverse impacts on people, businesses, and society. Left unmanaged, these risks can lead to harmful outcomes and rejection of the technology, thus undermining the growth potential it has for improving the quality of our lives.”

Sabrina Küspert, a Seconded Policy Expert from the Mercator Foundation, noted that ensuring the safety of foundation models is particularly important because so many other applications are built upon them. “If you look at the scale, that makes it quite unique,” Küspert said. “This risk management profile allows for informed debate about the severity and the likelihood of risks.”

Ian Eisenberg, who leads AI governance research at Credo AI, a developer of software used for assessing the safety of AI products, explained that the rapid pace of evolution in AI technologies “leads to humongous uncertainties that require additional focus.” He agreed that foundation models can be thought of as “infrastructure,” and that we need safeguards just as we have safety measures for other public infrastructure.

Christabel Randolph, a Public Interest Technology Fellow at the Georgetown University Law Center, stressed that we should avoid “tech exceptionalism,” and ensure that AI technologies are regulated as much as pharmaceuticals or other products that are put on the market. “My question is not, why should we have risk management for AI systems; my question is, why shouldn’t we have it?” she said. “We talk about national security and biosecurity risks, but from a very basic level, we see that these products and services and tools are going to be integrated into our daily lives, in classrooms, in healthcare, or in making credit decisions. That’s why we need to have appropriate risk management and governance frameworks.”

Nonnecke noted that “legislatures and governments around the world are waking up to the fact that we slept on regulating platforms for the past 20 years,” and asked the panelists to weigh in on the highest-priority risk-management actions or mitigations that we can implement for these general-purpose AI and foundation models.

Eisenberg noted the importance of evaluation and transparent reporting. “One of the things that comes out in this Profile many times is the complexity of the value chain and the need for different actors to get information from others who have better information about other aspects of the system,” he said. “Transparent reporting is the only thing that will allow us to have that information flow effectively through the value chain.”

“We need to understand the risks, and we also need to understand incidents,” Küspert added, noting the need for centralized resources for sharing information about models. “The greater the AI model’s capabilities and the attendant risks, the greater is the responsibility.” 

“Foundation models are evolving fast, but so is science about them, and science tells us the limitations of certain favorite mitigation techniques around risk management,” Vassilev said. “Where are the boundaries? What is possible, what is not? Having this knowledge helps us refine our standards or guidance, and that’s very valuable. I welcome the overall initiative to provide more research into this space, and the academic community is really doing a great job of that. We need to incorporate this newly emerging scientific knowledge into standards and best practices.”

Watch the full panel above or on YouTube.