AI Policy Hub

Advancing interdisciplinary research to anticipate and address AI policy opportunities

About

The AI Policy Hub is an interdisciplinary initiative training UC Berkeley researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.

We support annual cohorts of six outstanding UC Berkeley graduate students who conduct innovative research and produce policy deliverables that help reduce the harmful effects and amplify the benefits of artificial intelligence.

Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI. 

Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.

We are housed at the AI Security Initiative, part of the University of California, Berkeley’s Center for Long-Term Cybersecurity, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

We also collaborate with other UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.

Interested in applying?

Applications for the UC Berkeley AI Policy Hub for the Fall 2024 – Spring 2025 cohort are now closed. UC Berkeley students actively enrolled in graduate degree programs (Master’s and PhD students) from all departments and disciplines are encouraged to apply for the next cohort. Stay tuned for the next application period in Spring 2025!


Meet the Fall ’24 – Spring ’25 Cohort!

Syomantak Chaudhuri

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Heterogeneous Differential Privacy for Users

In today’s digital landscape, the invasive nature of online data collection poses significant challenges to user privacy. Using the framework of Differential Privacy for privacy analysis, this project focuses on providing individual users with their desired level of privacy, bridging the gap between user expectations and industry practices.
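As a rough illustration only (not the project's actual method, and all names here are hypothetical), heterogeneous differential privacy can be pictured as each user perturbing their data with noise calibrated to their own privacy parameter epsilon, where a smaller epsilon means more noise and stronger privacy:

```python
import math
import random

def sample_laplace(scale: float) -> float:
    """Draw from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def privatize(value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Perturb one user's value with Laplace noise scaled to that user's own
    epsilon: smaller epsilon -> larger noise -> stronger privacy."""
    return value + sample_laplace(sensitivity / epsilon)

# Each user reports at their chosen privacy level; the aggregator
# only ever sees the noisy values.
user_data = [(4.0, 0.5), (7.0, 2.0), (5.5, 10.0)]  # hypothetical (value, epsilon) pairs
noisy_reports = [privatize(v, eps) for v, eps in user_data]
```

The (value, epsilon) pairs above are made up for illustration; real systems must also account for how heterogeneous noise levels bias downstream aggregates, which is part of what makes this research direction nontrivial.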

Jaiden Fairoze

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Cryptographic Tools for Misinformation Prevention

As generative AI tools become more accessible and the quality of their output continues to improve, it is crucial to implement mechanisms that can reliably answer the question: “Is this content AI-generated or human-created?” This project leverages cryptography to provide reliable solutions to AI traceability problems, with a particular focus on ensuring strong authenticity guarantees.

Mengyu (Ruby) Han

Master of Public Policy Candidate, Goldman School of Public Policy
LinkedIn

Research Focus: International Coordination on AI

Given the risks of generative AI being easily exploited to exacerbate international adversarial relations, this project investigates how leading countries can cooperate to create a more transparent regulatory environment and support international peacekeeping.

Audrey Mitchell

J.D. Candidate, School of Law
LinkedIn

Research Focus: Adapting Legal Rules

Current legal rules do not provide adequate safeguards for AI use during legal proceedings. This project examines three rule structures (the Federal Rules of Evidence, the Federal Rules of Civil Procedure, and judge-specific standing orders) to analyze how they have been used creatively so far to respond to the new challenges that AI brings, and to identify their shortcomings in practice. The goal of this project is to operate within the existing legal rule structures while advocating for consistent, fair, and reliable amendments to those structures in order to provide codified protections for litigants who may be impacted by generative AI.

Ezinne Nwankwo

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Data Supported Street Outreach

Government agencies are increasingly adopting data-driven tools to predict critical life outcomes (e.g., homelessness) and allocate societal resources to those at risk and/or in need. However, local caseworkers must provide services in real time to the communities they work with, which often leads to tensions between these stakeholders. This project will map out the current landscape of AI/ML and homelessness services, with a focus on the needs and perspectives of a local nonprofit agency, on-the-ground street outreach workers, and the communities they work with. We seek to understand how the goals, principles, and patterns of ML research and social services to date have aligned with the needs of key stakeholders across the full pipeline of homelessness services. The project will provide guidance for future research at the intersection of AI and homelessness, along with best practices to foster successful collaborations among AI researchers, nonprofits, and government agencies that hope to use AI to build equitable futures for vulnerable communities.

Laura Pathak

PhD Student, Social Welfare

Research Focus: GenAI for Health and Human Services

Integrating GenAI to streamline health and human service delivery has the potential to improve care quality, widen access, and reduce costs while simultaneously ameliorating deep-seated service inequities impacting racial and ethnic minorities, youth, immigrants, people of low socioeconomic status, and those living in rural areas. This project will map and analyze the current landscape of global participatory policies, frameworks, structures, and initiatives that (a) harness public participation in AI/ML design, adoption, and oversight; and (b) are relevant to future sensitive uses of GenAI in health and human services. This study aims to provide policy recommendations for creating effective and meaningful public participation structures for GenAI accountability in U.S. public services.

Previous AI Policy Hub Fellows

Contact

If you are a UC Berkeley student with inquiries about the application, or a faculty member or researcher in the field interested in collaboration or providing student mentorship, please contact Jessica Newman at jessica.newman@berkeley.edu. For media inquiries, please contact Charles Kapelke at ckapelke@berkeley.edu. If you are interested in supporting our work philanthropically, please contact Shanti Corrigan at shanti@berkeley.edu, who can facilitate introductions to our team of experts and explain the impact that gifts of all sizes can make in advancing our mission.