AI Security Initiative

Analyzing Global Security Implications of Artificial Intelligence

Who We Are

Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security implications of AI.

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to drive transformative growth in the global economy, but those gains are currently poised to widen inequities, stoke social tensions, and fuel dangerous competition between nations. The AI Security Initiative works across technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future. We facilitate research and dialogue to help AI practitioners and decision-makers prioritize the actions they can take today that will have an outsized impact on the future trajectory of AI security around the world.

The Initiative’s long-term goal is to help communities around the world thrive with safe and responsible automation and machine intelligence. Download a PDF overview of the AI Security Initiative.

What We Do

The AI Security Initiative conducts independent research and engages with technology leaders and policymakers at the state, national, and international levels, leveraging UC Berkeley’s premier reputation and our SF Bay Area location near Silicon Valley. Our activities include conducting and funding technical and policy research and translating that research into practice. We convene international stakeholders, hold policy briefings, publish white papers and op-eds, and engage with leading partner organizations in AI safety, governance, and ethics.

Our research agenda focuses on the key decision points that will have the greatest impact on the future trajectory of AI security, including decisions about how AI systems are designed, bought, and deployed. These decisions will affect everything from AI standards and norms to global power dynamics and the changing nature of warfare. Our research addresses three key challenges: vulnerabilities, misuse, and power.

[Graphic: Vulnerabilities, Misuse, and Power]

Research and Media

Actionable Guidance for High-Consequence AI Risk Management

Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems

University of California Presidential Working Group on AI Final Report

NIST’s AI Risk Management Framework Should Address Key Societal-Scale Risks

AI & Cybersecurity: Balancing Innovation, Execution & Risk

Guidance for the Development of AI Risk and Impact Assessments

Now is the Time for Transatlantic Cooperation on Artificial Intelligence

Explainability Won’t Save AI

Designing Risk Communications: A Roadmap for Digital Platforms

AI at the Borderlands

Government AI Readiness Index 2020

The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry

ML Failures Labs

AI Principles in Context

Pandemic is showing us we need safe and ethical AI more than ever

The World Isn’t Ready for AI to Upend the Global Economy

Events and Announcements

June 2022: Can Documentation Improve Accountability for Artificial Intelligence?

An online panel discussion on the current state of AI documentation, how far the AI community has come in adopting these practices, and new ideas to support trustworthy AI well into the future. Learn more.

July 2021: AI Language Models: Mitigating Harms Through Responsible Research and Publication

Distinguished speakers Carolyn Ashurst, Senior Research Associate in Safe and Ethical AI at the Alan Turing Institute, Rosie Campbell, Technical Program Manager at OpenAI, and Zeerak Waseem, PhD candidate at the University of Sheffield, share their perspectives from the front lines of AI research and development. Learn more.

March 2021: Call for Graduate Student Researchers: Global Governance and Security Implications of Artificial Intelligence

The AI Security Initiative seeks applications for Graduate Student Researcher positions within AISI, limited-term appointments for Summer 2021. Learn more.

October 27, 2020: AI Race(s) to the Bottom? Consequences of Competitive AI Development Across Industries

AISI and AI policy experts will discuss when and where AI “races to the bottom” might be more or less harmful, and the surprising ways that specific industries are approaching AI development more cautiously and cooperatively. Learn more.

July 2020: AISI, CITRIS Policy Lab Collaboration with California Department of Technology

AISI, in partnership with the CITRIS Policy Lab, launched a year-long collaboration with the California Department of Technology to conduct an analysis of AI-enabled tools in select state departments and develop statewide policy recommendations to inform the procurement, development, implementation, and monitoring of such tools in the public sector. Learn more.

February 2020: AISI Speaker Seminar – “Veridical Data Science” featuring Professor Bin Yu

In this seminar, Professor Yu presented her latest work focusing on a predictability, computability, and stability (PCS) framework, which aims to provide responsible, reliable, reproducible, and transparent results across the entire data science life cycle. Learn more.

November 2019: “Human Compatible: AI and the Problem of Control” with Professor Stuart Russell

AISI and the Center for Human-Compatible Artificial Intelligence (CHAI) co-presented a book talk featuring Stuart Russell, author of Human Compatible: Artificial Intelligence and the Problem of Control. Learn more.