CLTC is excited to announce the six new graduate students from across the UC Berkeley campus who have been selected to join the AI Policy Hub, an interdisciplinary center focused on translating scientific research into governance and policy frameworks to shape the future of artificial intelligence (AI).
The UC Berkeley AI Policy Hub is run by the AI Security Initiative, part of the Center for Long-Term Cybersecurity at the UC Berkeley School of Information, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).
Each of the students in the cohort will conduct innovative research and make meaningful contributions to the AI policy landscape, helping to reduce the harmful effects and amplify the benefits of artificial intelligence. The researchers will share their findings through symposia, policy briefings, papers, and other resources to inform policymakers and other AI decision-makers so they can act with foresight.
“The AI Policy Hub plays an important role in the UC Berkeley AI ecosystem – bringing together graduate students from across departments and disciplines to collaborate on some of the most pressing societal challenges we face amidst the rise of AI and generative AI across industries,” says Jessica Newman, Director of the AI Security Initiative and Co-Director of the AI Policy Hub. “We look forward to seeing thoughtful research and timely policy impact from the Fellows over the coming year.”
Following are brief profiles of the six students in the Fall ’24 – Spring ’25 cohort:
Syomantak Chaudhuri is a PhD Candidate in the UC Berkeley Department of Electrical Engineering and Computer Sciences (EECS), advised by Tom Courtade. His work explores heterogeneous differential privacy for users. In today’s digital landscape, the invasive nature of online data collection poses significant challenges to user privacy. Using the framework of differential privacy, his research focuses on providing individual users with their desired level of privacy while bridging the gap between user expectations and industry practices.
Jaiden Fairoze is a PhD Student in the EECS department, advised by Professor Sanjam Garg. His project will examine the use of cryptographic tools to prevent misinformation. As generative AI tools become more accessible and the quality of their output continues to improve, it is crucial to implement mechanisms that can reliably answer the question: “Is this content AI-generated or human-created?” His research leverages cryptography to provide reliable solutions to AI traceability problems, with a particular focus on ensuring strong authenticity guarantees.
Mengyu (Ruby) Han is a Master of Public Policy Candidate at the UC Berkeley Goldman School of Public Policy. Her research interests lie at the intersection of technology policy and international policy, specifically 5G development and related industrial policies. Given the risk that generative AI can easily be exploited to exacerbate international adversarial relations, her project investigates how leading countries can cooperate to create a more transparent regulatory environment and support international peacekeeping.
Audrey Mitchell is a J.D. Candidate at the UC Berkeley School of Law, where she is exploring whether current legal rules provide adequate safeguards for AI use during legal proceedings. Her work examines three rule structures: the Federal Rules of Evidence, the Federal Rules of Civil Procedure, and judge-specific standing orders. She analyzes how each has been creatively applied so far to the new challenges that AI brings, and identifies their shortcomings in practice. Her research aims to work within these existing rule structures while advocating for consistent, fair, and reliable amendments that provide codified protections for litigants who may be affected by generative AI.
Ezinne Nwankwo is a PhD Student in the EECS department, where her research uses statistics and machine learning (ML) to understand social issues and improve equity and access for underserved communities. Her work will map the current landscape of AI/ML and homelessness services, focusing on the needs and perspectives of a local nonprofit agency, on-the-ground street outreach workers, and the communities they serve. She seeks to understand how the goals, principles, and patterns of ML research and social services to date have aligned with those of key stakeholders across the full pipeline of homelessness services. Her project will provide guidance for future research at the intersection of AI and homelessness, along with best practices to foster successful collaborations among AI researchers, nonprofits, and government agencies that hope to use AI to build equitable futures for vulnerable communities.
Laura Pathak is a PhD Student at the UC Berkeley School of Social Welfare, where her research explores integrating generative AI (GenAI) to streamline health and human service delivery. The goal is to improve care quality, widen access, and reduce costs while ameliorating deep-seated service inequities that impact racial and ethnic minorities, youth, immigrants, people of low socioeconomic status, and those living in rural areas. Her project will map and analyze the current landscape of global participatory policies, frameworks, structures, and initiatives that (a) harness public participation in AI/ML design, adoption, and oversight; and (b) are relevant to future sensitive uses of GenAI in health and human services. Her study aims to provide policy recommendations for creating effective and meaningful public participation structures for GenAI accountability in U.S. public services.