Research Library

Since 2015, CLTC has directly funded more than 160 original cybersecurity research projects by UC Berkeley students, faculty, and affiliates, in addition to developing our own white papers, publications, blogs, policy analysis and recommendations, and open-source curricula and toolkits.

Research Item


March 13, 2024
By: Ravi Patnala, Mayank Saxena, Meenakshi Sriraman

BreachProphet aims to address critical gaps in current cybersecurity solutions, particularly the lack of an effective mechanism to map and predict risks stemming from vulnerabilities for businesses by…

January 23, 2024

White Paper

[Cover image of the report, featuring symbols like dollar signs, padlocks, and the SEC]

Representing Privacy Legislation as Business Risks

By: Andrew Chong, Richmond Wong

For this CLTC white paper, researchers Richmond Wong and Andrew Chong used Form 10-K documents — annual regulatory reports for investors that publicly traded companies must file with the U.S. Securities and Exchange Commission (SEC) — to analyze how nine major technology companies assess and integrate the business risks of privacy regulations such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA).

December 5, 2023

White Paper

[Image: a network of boxes, padlocks, and other shapes]

Cybersecurity Futures 2030: New Foundations

To better understand how diverse forces are shaping the future of cybersecurity for governments and organizations, the Center for Long-Term Cybersecurity (CLTC), the World Economic Forum Centre for Cybersecurity, and CNA’s Institute for Public Research collaborated on “Cybersecurity Futures 2030: New Foundations,” a foresight-focused research initiative that aims to inform cybersecurity strategic plans around the globe.

November 8, 2023

White Paper

[Image: a steam-driven tool]

AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models

By: Anthony Barrett, Jessica Newman, Brandie Nonnecke

Increasingly general-purpose AI systems, such as BERT, CLIP, GPT-4, DALL-E 2, and PaLM, can provide many beneficial capabilities, but they also introduce risks of adverse events with societal-scale consequences. This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of such AI systems. It is intended primarily for developers of these AI systems; others who can benefit from this guidance include downstream developers of end-use applications built on a general-purpose AI system platform. The document facilitates conformity with leading AI risk-management standards and frameworks, adapting and building on the generic voluntary guidance in the NIST AI RMF and the ISO/IEC 23894 AI risk management standard, with a focus on the unique issues faced by developers of increasingly general-purpose AI systems.

October 19, 2023


LLM Canary Open-Source Security Benchmark Tool

By: Jamie Cohen, Jackson Gor, Rona Michele Spiegel, Peter Steinhoff, Jennifer Yonemitsu

Generative AI is rapidly expanding and poised to revolutionize multiple industries. The surge in adoption has led to an increased use of pre-trained Large Language Models (LLMs), but…