On February 24, 2023, a group of researchers — affiliated with UC Berkeley — with expertise in AI research and development, safety, security, policy, and ethics submitted this formal response to the National Institute of Standards and Technology (NIST), addressing the NIST AI Risk Management Framework (AI RMF) Full Draft Playbook, Roadmap, and Crosswalks released in January 2023. The researchers previously submitted responses to NIST in September 2021 on the NIST AI RMF Request For Information (RFI), in January 2022 on the AI RMF Concept Paper, in April 2022 on the AI RMF Initial Draft, and in September 2022 on the AI RMF 2nd Draft and Initial Draft Playbook.
Here is a high-level summary of some of our key comments and recommendations on the January 2023 Full Draft Playbook and Roadmap:
- Ensure consistency in the evaluation of both the likelihood and magnitude of identified impacts throughout the mapping function.
- Include in the Playbook examples of potentially unacceptable risks drawn from the main AI RMF 1.0 guidance document.
- Encourage consideration of the potential for unintended consequences arising from failures to correctly specify system objectives.
- Enhance the utility of the Playbook by adding publicly available tools and resources to each subcategory.
- Provide examples to help organizations consider potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet.
- Encourage organizations to establish policies and practices to inform users (and allow them to opt out) if they are interacting with an AI system or if a decision that impacts them was made by an AI system.
- Encourage organizations to establish policies and practices to provide recourse or redress to people who experience negative impacts related to the use of an AI system.
The entire response, including details and additional comments, is available as a PDF download below.