News / August 2023

Meet the Fall ’23 – Spring ’24 AI Policy Hub Fellows

Six new graduate students from across the UC Berkeley campus have been selected to join the AI Policy Hub, an interdisciplinary center focused on translating scientific research into governance and policy frameworks to shape the future of artificial intelligence (AI).

The UC Berkeley AI Policy Hub is run by the AI Security Initiative, part of the Center for Long-Term Cybersecurity at the UC Berkeley School of Information, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

“As AI becomes more pervasive in society, there is a critical need to train future leaders who can best ensure society benefits from this transformative technology,” says Brandie Nonnecke, Director of the CITRIS Policy Lab and Co-Director of the AI Policy Hub. “By bringing together graduate students conducting innovative AI research and providing training on how to translate their research into impactful policy deliverables, the AI Policy Hub is preparing future leaders with the knowledge and skills to best ensure AI is a force for good.”

Each of the students in the cohort will conduct independent research on a specialized topic related to AI, and will then use their findings to develop policy recommendations for realizing the potential benefits of AI, while managing harms and reducing the risk of devastating outcomes, including accidents, abuses, and systemic threats. The researchers will share findings through symposia, policy briefings, papers, and other resources, to inform policymakers and other AI decision-makers so they can act with foresight.

“We are excited to build on the successes of the past year and provide an unparalleled experience for the new cohort – from a world-renowned speaker series to hands-on workshops and a tight-knit community,” says Jessica Newman, Director of the AI Security Initiative and Co-Director of the AI Policy Hub. “Each of the AI Policy Hub Fellows is working on a critical and cutting-edge AI policy challenge, and we look forward to seeing the impact of their work out in the world.”

Following are brief profiles of the six students in the Fall ’23 – Spring ’24 cohort:

Marwa Abdulhai is an AI PhD student at UC Berkeley advised by Sergey Levine. Her research examines how machine learning systems that communicate or interact directly with humans, such as language models, dialogue systems, and recommendation systems, have led to wide-scale deceit and manipulation. She is working to characterize deceit and manipulation in these systems and to build reinforcement learning algorithms with reward terms that prevent certain kinds of deception and align with human values.

Jessica Dai is a PhD student in EECS at UC Berkeley advised by Ben Recht and Nika Haghtalab. Her project, in collaboration with Deb Raji, explores the design of a framework for the general public to report and contest large-scale harms from algorithmic decision-making systems (ADS). Most individuals have little control over when ADS are used, much less any ability to affect the design and oversight of their use. The work focuses on empowering users to identify systematic patterns of mistakes made by an algorithm on groups of people that may not have been identified a priori and that emerge only after a period of deployment.

Ritwik Gupta is an AI PhD student at UC Berkeley advised by Shankar Sastry, Trevor Darrell, and Janet Napolitano. His research aims to create computer vision methods that help first responders make better sense of a chaotic and unpredictable world, making aid provision more effective and efficient. Ritwik’s work also focuses on strengthening dual-use applications by translating advances in machine learning for humanitarian assistance and disaster response (HADR) to new domains, such as broader national security challenges.

Christian Ikeokwu is a PhD student in EECS advised by Christian Borgs and Jennifer Chayes. His work focuses on the risk that users may intentionally or unintentionally bypass the safety mechanisms of generative AI models, leading to unintended and potentially harmful outcomes. His project is helping develop algorithms that teach AI general safety “meta-principles” it can apply in specific contexts, so that safety mechanisms generalize to inputs vastly different from the distribution the model was initially trained on.

Janiya Peters is a PhD student at the UC Berkeley School of Information advised by Deirdre Mulligan. Her work explores the ways in which advancements in generative AI image models place visual creators in a vulnerable position and challenge intellectual property rights, as well as how visual creators adopt strategies of resistance to retain agency over their intellectual property, labor, and compensation. Her research seeks to identify sites of dispute between visual creators and Stable Diffusion models, and to discern the actions creators take to obfuscate and/or repossess their work. This work will inform potential policy interventions at the intersection of generative AI and creative expression.

Guru Vamsi Policharla is a Computer Science PhD student at UC Berkeley advised by Sanjam Garg. Guru’s project focuses on the potential of Cryptographic Proofs of Training to produce publicly verifiable proofs that an AI system is robust, fair, valid, and reliable without compromising the privacy of the underlying dataset or the machine learning model. Such proofs can support accountability for companies deploying AI, especially those that limit public access to their training procedures and datasets by citing privacy concerns and intellectual property protection.