AI Policy Hub

Advancing interdisciplinary research to anticipate and address AI policy opportunities

About

The AI Policy Hub is an interdisciplinary initiative training UC Berkeley researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.

We support annual cohorts of six outstanding UC Berkeley graduate students who conduct innovative research and produce policy deliverables that help reduce the harmful effects and amplify the benefits of artificial intelligence.

Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI. 

Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.

We are housed at the AI Security Initiative, part of UC Berkeley’s Center for Long-Term Cybersecurity, and at the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

We also collaborate with other UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.

Interested in applying?

If you are a UC Berkeley graduate student interested in applying for the 2024 – 2025 AI Policy Hub Fellowship, please check back on this webpage in Spring 2024 for further information.


Meet the Fall ’23 – Spring ’24 Cohort!

Marwa Abdulhai

PhD Student, Electrical Engineering and Computer Sciences, Berkeley Artificial Intelligence Research Lab (BAIR)
LinkedIn

Research Focus: Deception in AI Systems

Marwa Abdulhai is an AI PhD student at UC Berkeley advised by Sergey Levine. Her work explores how machine learning systems that directly communicate or interact with humans, such as language models, dialogue systems, and recommendation systems, can lead to wide-scale deceit and manipulation. She studies what constitutes deceit and manipulation in these systems and builds reinforcement learning algorithms with reward terms that prevent certain kinds of deception and align with human values.

Jessica Dai

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Fairness Without Categories: Enabling Collective Action for Algorithmic Accountability

Jessica Dai is a PhD student in EECS at UC Berkeley advised by Ben Recht and Nika Haghtalab. Her project, in collaboration with Deb Raji, designs a framework for the general public to report and contest large-scale harms from algorithmic decision-making systems (ADS). Most individuals have little control over when ADS are used, much less any ability to affect the design and oversight of their use. The work is particularly focused on ways to empower users to identify systematic patterns of mistakes an algorithm makes on groups of people that may not have been identified a priori, and that emerge only after a period of deployment.

Ritwik Gupta

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Computer Vision for Humanitarian Assistance and Disaster Response (HADR)

Ritwik Gupta is an AI PhD student at UC Berkeley advised by Shankar Sastry, Trevor Darrell, and Janet Napolitano. His research aims to create computer vision methods that help first responders make better sense of a chaotic and unpredictable world, making aid provision more effective and efficient. Ritwik’s work also focuses on strengthening dual-use applications by translating advances in machine learning for HADR to new domains, such as broader national security challenges.

Christian Ikeokwu

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Generative AI Safety through Meta-Principles

Christian Ikeokwu is a PhD student in EECS advised by Christian Borgs and Jennifer Chayes. His work focuses on the risk that users may intentionally or unintentionally bypass the safety mechanisms of generative AI models, leading to unintended and potentially harmful outcomes. His project is helping develop algorithms that teach AI general safety “meta-principles” it can apply in specific contexts, so that safety mechanisms generalize to inputs vastly different from the distribution on which the model was initially trained.

Janiya Peters

PhD Student, School of Information
LinkedIn

Research Focus: Resistance to Text-to-Image Generators in Creator Communities

Janiya Peters is a PhD student at the UC Berkeley School of Information advised by Deirdre Mulligan. Her work explores the ways in which text-to-image models compromise visual creators’ intellectual property rights, as well as how visual creators adopt strategies of resistance to retain agency over their intellectual property, labor, and compensation. Her project identifies sites of dispute between stakeholders and discerns individual and collective action toward repossessing appropriated works. The project will inform policy at the intersection of copyright, data labor, and creative expression.

Guru Vamsi Policharla

PhD Student, Electrical Engineering and Computer Sciences
LinkedIn

Research Focus: Zero Knowledge Proofs for Machine Learning

Guru Vamsi Policharla is a Computer Science PhD student at UC Berkeley advised by Sanjam Garg. Guru’s project focuses on the potential of cryptographic Proofs of Training to produce publicly verifiable proofs that an AI system is robust, fair, valid, and reliable without compromising the privacy of the underlying dataset or the machine learning model. Such proofs can support accountability for companies deploying AI, especially those that cite privacy concerns and intellectual property protection to limit public access to their training procedures and datasets.

Fall ’22 – Spring ’23 Cohort

Alexander Asemota

PhD Student, Statistics, Division of Computing, Data Science, and Society
LinkedIn

Alex Asemota is a third-year PhD student in the statistics department advised by Giles Hooker. His research focuses on explainability in machine learning; he is currently developing counterfactual methods that are useful for practitioners in industry. A graduate of Howard University, Alex was awarded a Chancellor’s Fellowship during the first two years of his PhD training at UC Berkeley.

Research Focus: Development of realistic metrics for counterfactual explanations in AI.

Micah Carroll

PhD Student, Electrical Engineering and Computer Sciences
GitHub | @MicahCarroll

Micah Carroll is an Artificial Intelligence PhD student at UC Berkeley advised by Anca Dragan and Stuart Russell. Originally from Italy, Micah graduated with a Bachelor’s in Statistics from Berkeley in 2019. His research interests lie in human-AI systems: in particular the effects of social media on users and society, and making AIs better at complementing and collaborating with humans.

Research Focus: Identification of manipulation incentives in recommender systems that maximize long-term engagement.

Angela Jin

PhD Student, Electrical Engineering and Computer Sciences
Profile | @angelacjin

Angela Jin is a second-year PhD student at UC Berkeley advised by Rediet Abebe. Previously, she was at Cornell University, where she received her B.S. in Computer Science in 2021. Her research interests lie at the intersection of human-computer interaction and machine learning, with a focus on bridging research and practice to build computational tools for scrutinizing algorithmic systems. Through her work, Angela strives to improve equity and access to opportunity for marginalized communities.

Research Focus: Design of sociotechnical systems to help defense attorneys adversarially test the reliability of evidentiary statistical software in the U.S. criminal legal system.

Zoe Kahn

PhD Student, School of Information
LinkedIn | @zoebkahn

Zoe Kahn’s research explores how AI/ML systems may result in unanticipated dynamics, including harms to people and society. She uses qualitative methods to understand the perspectives and experiences of impacted communities, then leverages storytelling to influence the design of technical systems and the policies that surround their use. Zoe has conducted fieldwork in rural communities in the United States, worked on issues of homelessness in the Bay Area, and is currently working on a project that uses data-intensive methods to allocate humanitarian aid to individuals experiencing extreme poverty in Togo.

Research Focus: Development of empirically grounded stories from Togo and the Bay Area to help position policymakers and technologists to better account for the situated experiences, practices, and perspectives of impacted communities.

Zhouyan Liu

MPP Student, Goldman School of Public Policy
LinkedIn

Zhouyan Liu graduated from Peking University and spent four years as an investigative journalist for Sanlian Lifeweek, a Beijing-based weekly news magazine, covering technology and politics. He has also worked part-time or interned at ByteDance (TikTok)’s global public policy team, the California Office of Digital Innovation, and the Cyber Policy Center at Stanford University. At UC Berkeley, Zhouyan is an MPP candidate at the Goldman School of Public Policy. His research interests include empirical studies of China’s technology policy, digital surveillance, and privacy.

Research Focus: Analysis of China’s digital surveillance system and its consequences for privacy rights, state capacity, and state-society relations.

Cedric Whitney

PhD Student, School of Information
@CedricWhitney

Cedric Deslandes Whitney is a third-year PhD student at Berkeley’s School of Information, advised by Professors Jenna Burrell and Deirdre Mulligan. He is an NSF Graduate Research Fellow, and his background is in leading the deployment of federated machine learning infrastructure in healthcare. His research focuses on using mixed methods to tackle questions of AI governance, including previous work at the FTC on algorithmic disgorgement and at IBM on the right to be forgotten in AI systems.

Research Focus: Exploration of how algorithmic disgorgement (machine unlearning) can be effectively wielded in both compliance efforts and prospective legislation.

Contact

If you are a UC Berkeley student with inquiries about the application, or a faculty member or researcher interested in collaboration or providing student mentorship, please contact Jessica Newman at jessica.newman@berkeley.edu. For media inquiries, please contact Charles Kapelke at ckapelke@berkeley.edu. If you are interested in supporting our work philanthropically, Shanti Corrigan (shanti@berkeley.edu) can facilitate introductions to our team of experts and explain the impact that gifts of all sizes can make toward advancing our mission.