News / March 2022

Recommendations to OSTP on the National Artificial Intelligence Research and Development Strategic Plan

On March 4, 2022, a group of professors and researchers with expertise in AI research and development, policy, and ethics – affiliated with centers at the University of California, Berkeley, as well as external non-profit research organizations focused on technology and governance – submitted this formal response to the Office of Science and Technology Policy’s (OSTP) request for information on updating the National Artificial Intelligence Research and Development Strategic Plan.

In this submission, the researchers affirm the continued importance of the eight strategic aims described in the 2019 Update, while advocating for modest changes to each aim that reflect continued learning across the AI R&D landscape. They also advocate for the inclusion of a ninth strategy: one that draws attention to the need for research on transparency and documentation of AI systems and applications. The researchers believe this strategy is a necessary addition to support responsible and sustainable advances in the technology. These recommendations are intended to help ensure the National AI R&D Strategic Plan enables sustained technological innovation; supports broad inclusion, economic prosperity, and national security; and upholds essential democratic values.

A one-sentence summary of the main recommendation for each strategy is listed below:

  • We encourage a strengthened focus on multidisciplinary research that supports AI robustness, ethics, transparency, and security integrated with long-term investments in fundamental research.
  • We encourage greater focus on assessing the appropriateness of varying human-machine teaming arrangements and on understanding the associated human labor implications.
  • We encourage strengthened research and transparency in the integration of ethical, legal, and societal concerns throughout all stages of the AI lifecycle, as well as on the detection of malicious uses of AI including potential human rights abuses.
  • We encourage strengthened research on how to manage and prevent the safety and security challenges that grow as AI systems become more advanced and capable, including the role of greater transparency and public awareness.
  • We encourage research on how to reduce energy and carbon footprints for AI development and operation, and the role of public training and testing environments in that reduction.
  • We encourage research that investigates how standards, benchmarks, and testing requirements for a broad set of quality controls will inform evolving AI development and deployment, and how to encourage adoption.
  • We emphasize the need to not only broaden participation in computing and engineering fields, but also to provide educational opportunities to train computer scientists and engineers to be fluent in social and ethical impact, and in professional responsibility.
  • We encourage increased focus on international cooperation and coordination on AI research as well as support for research partnerships that include civil society and impacted communities.
  • We encourage support for research that identifies effective mechanisms for transparency and documentation of AI systems and applications.

Download the full comment