News / September 2022

Response to NIST AI RMF Second Draft and Initial Playbook

On September 28, 2022, a group of researchers – affiliated with centers at the University of California, Berkeley – with expertise in AI research and development, safety, security, policy, and ethics submitted this formal response to the National Institute of Standards and Technology (NIST) on the Second Draft of the NIST AI Risk Management Framework (AI RMF) and the accompanying Initial Draft Playbook. The researchers previously submitted responses to NIST in September 2021 on the NIST AI RMF Request for Information (RFI), in January 2022 on the AI RMF Concept Paper, and in April 2022 on the AI RMF Initial Draft.

In this submission, the researchers provide in-depth comments, first regarding the questions posed by NIST in the AI RMF Initial Draft Playbook, and then on specific passages in the NIST AI RMF 2nd Draft and Initial Draft Playbook.

Here are the researchers’ key high-level comments and recommendations on the AI RMF Second Draft:

  1. We agree with NIST’s statements in Section 1.1 and Appendix B of the AI RMF 2nd Draft that AI innovations have great potential for benefits across society, but that AI systems can also present risks requiring particular approaches and considerations, such as addressing the emergent properties of AI systems and the potential for unintended consequences at both an individual and a societal scale. We also agree with NIST’s statements in Section 1.2 that applying the AI RMF, especially from the beginning of an AI system’s lifecycle, should help organizations manage those risks, with the aims of reducing the likelihood and magnitude of negative impacts (and increasing the benefits) to individuals, groups, communities, organizations, and society.
    • We recommend NIST keep these statements in the AI RMF. These passages highlight distinctive opportunities and risks for AI, and ways in which the AI RMF can help organizations address those risks effectively.
  2. We agree with NIST’s statements in Section 3.2.2 of the AI RMF 2nd Draft that although the AI RMF “does not prescribe risk tolerance”, it can be used to prioritize risks and determine which risks “call for the most urgent prioritization and most thorough risk management process.” We also agree that “In some cases where an AI system presents the highest risk – where negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently mitigated. Conversely, the lowest-risk AI systems and contexts suggest lower prioritization.”
    • We recommend NIST keep these statements in the AI RMF. We understand that specifics of risk tolerance will depend on particular contexts, including regulatory considerations. We also believe there is broad agreement on the importance of prioritizing the highest risks to individuals, groups, communities, organizations, and society, and that these include cases “where negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present”. Moreover, there is precedent for NIST framework guidance prompting identification of risks with potentially catastrophic impacts: the NIST Cybersecurity Framework guidance on risk assessment points to NIST SP 800-53 control RA-3, which in turn references NIST SP 800-30; the impact assessment scale in Table H-3 of SP 800-30 includes criteria for rating an expected impact as a “catastrophic adverse effect” on individuals, organizations, or society.
  3. We recommend refinement of several aspects of the Map, Measure, and Manage functions.
    • For the Map function, we recommend NIST clarify that in addition to describing intended beneficial “use cases” for an AI system as part of Map activities, it is valuable for Map activities to include identification of other potentially beneficial uses of an AI system, as well as negative “misuse/abuse cases”. This would better address both the potential benefits and the adverse risks of reasonably foreseeable “off-label” uses, beyond an AI developer’s or deployer’s originally intended uses of an AI system. Identification of other potentially beneficial uses should be a clearer part of Map 1.1 on system-use understanding and documentation, and possibly also Map 5.1 on impact identification. Misuse/abuse case identification should be a clearer part of Map 5.1 on impact identification, and possibly also Map 1.1 on system-use understanding and documentation and Measure 2.7 on AI system resilience and security evaluation. We believe it is generally worthwhile to identify reasonably foreseeable uses and misuses of AI systems as part of risk management.
    • For Measure 1.1 on measurement of risks and Manage 1.3 on responses to risks, we recommend NIST revise these subcategories to address a broader set of “identified risks” rather than only the “most significant risks”. The equivalent of Manage 1.3 in the AI RMF Initial Draft stated, “Responses to enumerated risks are identified and planned. Responses can include mitigating, transferring or sharing, avoiding, or accepting AI risks.” However, Manage 1.3 in the AI RMF 2nd Draft stated, “Responses to the most significant risks, identified by the Map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, sharing, avoiding, or accepting.” Measure 1.1 was similarly changed from recommending measurement of enumerated risks to recommending measurement of only the most significant risks. This change from “enumerated risks” to “the most significant risks” could be counterproductive. Organizations may decide to monitor and track, or simply accept, many identified risks that are not the most significant, but in some cases it may be cost-effective and worthwhile to mitigate those risks as well. It seems prudent for organizations to choose how to address all identified risks, rather than simply ignoring those not deemed the most significant.

We also commend NIST on the many improvements between the AI RMF Initial Draft and the 2nd Draft, including the following:

  • Removing the previous, confusing split between “technical characteristics”, “socio-technical characteristics”, and “principles” of trustworthy AI, instead confirming that all of these characteristics require and involve human judgment, and providing guidance for addressing them
  • Clarification of various AI system stakeholders
  • Discussion of how AI risks differ from traditional software risks, including the higher degree of difficulty in predicting failure modes for emergent properties of large-scale pre-trained models
  • Adding environmental and ecosystem harms to the list of examples of potential harms under “harm to a system”
  • Addition of documentation items throughout
  • Addressing third-party and supply chain risks as part of each function
  • Addition of test, evaluation, verification, and validation (TEVV) activities throughout, including to monitor and assess risks of emergent properties of AI systems

In the full submission, the researchers provide additional detailed comments, first regarding the questions posed by NIST in the AI RMF Initial Draft Playbook, and then on specific passages in the NIST AI RMF 2nd Draft and Initial Draft Playbook.

Response to NIST AI RMF 2nd Draft and Initial Playbook