June 1, 2022

Event Recap: “Can Documentation Improve Accountability for Artificial Intelligence?”

Diverse processes and practices have been developed in recent years for documenting artificial intelligence (AI), with the shared aim of improving transparency, safety, fairness, and accountability in the development and use of AI systems. On May 25, 2022, CLTC’s AI Security Initiative presented an online panel discussion on the…

May 2, 2022

Recommendations to NIST on the AI Risk Management Framework Initial Draft

On April 28, 2022, a group of researchers affiliated with centers at the University of California, Berkeley, with expertise in AI research and development, safety, security, policy, and ethics, submitted this formal response to the National Institute of Standards and Technology (NIST) concerning the Initial Draft…

April 4, 2022

AI Policy Hub Now Accepting Applications

The UC Berkeley AI Policy Hub is now accepting applications for its inaugural Fall 2022 – Spring 2023 cohort. Applications are due by Tuesday, April 26 at 11:59 PM (PDT). What are the benefits of the program to participants? Participants of the AI Policy Hub will have the…

March 10, 2022

UC Berkeley Launches AI Policy Hub

BERKELEY, CA — Two prominent research centers at the University of California, Berkeley have joined forces to launch the AI Policy Hub, an interdisciplinary initiative that trains forward-thinking researchers to develop effective governance and policy frameworks to guide AI, today and into the future. The AI Policy Hub will support cohorts…

February 8, 2022

New CLTC White Paper Proposes “Reward Reports” for Reinforcement Learning Systems

“Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems,” a new report by a team of researchers affiliated with the UC Berkeley Center for Long-Term Cybersecurity’s Artificial Intelligence Security Initiative (AISI), examines potential benefits and challenges related to reinforcement learning (RL), and provides recommendations to help policymakers ensure that RL-based systems are deployed safely and responsibly.

January 25, 2022

Response to NIST AI Risk Management Framework Concept Paper

On January 25, 2022, a group of researchers affiliated with UC Berkeley responded to NIST’s request for comments on its Concept Paper for an AI Risk Management Framework. The entire response is available as a PDF download here.

October 28, 2021

CLTC’s Jessica Newman Supports the University of California in Developing a Responsible AI Strategy

A new report from the University of California Office of the President provides recommendations for how the 10-campus UC System can prepare for and mitigate the potential harms of artificial intelligence. CLTC’s Jessica Newman served on the UC Presidential Working Group on Artificial Intelligence, whose report outlines a set of responsible principles to promote the safe and ethical development, use, procurement, and monitoring of AI across the university.

August 9, 2021

Guidance for the Development of AI Risk and Impact Assessments

A new report from the Center for Long-Term Cybersecurity provides a set of recommendations to help governments and other organizations evaluate the potential risks and harms associated with new artificial intelligence (AI) technologies. The paper, Guidance for the Development of AI Risk and Impact Assessments, by Louis Au Yeung, a…

August 3, 2021

AI Language Models: Mitigating Harms Through Responsible Research and Publication

Artificial intelligence-based language models have advanced significantly in recent years, and are now capable of writing and speaking in ways that appear shockingly human. Yet for all their potential benefits, these technologies have myriad associated costs, and there is growing urgency within the AI community to address these issues,…

March 18, 2021

Call for Graduate Student Researchers: Global Governance and Security Implications of Artificial Intelligence

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) invites applications for Graduate Student Researcher positions within the CLTC AI Security Initiative for limited-term appointments in Summer 2021. The accepted applicant(s) will have the opportunity to engage with CLTC staff and networks and to contribute to a growing hub for interdisciplinary research on the global governance and security implications of artificial intelligence. Opportunities will vary based on the skills and interests of the applicant. The Initiative is interdisciplinary, and applicants from all departments, including PhD, master’s, and law students, are encouraged to apply.

November 3, 2020

AI Race(s) to the Bottom? A Panel Discussion

Countries and corporations around the world are vying for leadership in AI development and use, prompting widespread discussions of an “AI arms race” or “race to the bottom” in AI safety. But the competitive development of AI will take place across multiple industries and among very different sets of actors,…

May 5, 2020

New CLTC Report: “Decision Points in AI Governance”

The Center for Long-Term Cybersecurity (CLTC) has issued a new report that takes an in-depth look at recent efforts to translate artificial intelligence (AI) principles into practice. The report, “Decision Points in AI Governance,” authored by CLTC Research Fellow and AI Security Initiative (AISI) Program Lead Jessica Cussins Newman, provides an overview of 35 efforts already under way to implement AI principles, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline.

March 9, 2020

Video: CLTC Seminar on “Veridical Data Science”

On February 19, 2020, the AI Security Initiative at the Center for Long-Term Cybersecurity (CLTC) hosted a lunchtime seminar featuring Bin Yu, Chancellor’s Professor in the Departments of Statistics and Electrical Engineering & Computer Science at UC Berkeley. CLTC’s AI Security Initiative (AISI) works across technical, institutional, and policy domains…