News / March 2026

UC Berkeley’s AI Policy Hub Celebrates a New Generation of Leaders


Artificial intelligence is advancing rapidly, ushering in an array of potentially catastrophic risks, yet policymakers and industry leaders have struggled to keep up with the pace of change.

In 2022, with support from the Future of Life Institute, the Center for Long-Term Cybersecurity (CLTC) partnered with the CITRIS Policy Lab to launch the AI Policy Hub (AIPH), an initiative dedicated to training UC Berkeley graduate students from diverse disciplines to develop governance and policy frameworks to guide the future of AI.

To date, three cohorts of AI Policy Hub Fellows (18 researchers in total) have participated in the program, which aims to expand and diversify the pool of leaders in government and industry with AI policy expertise. Through an immersive curriculum, AIPH Fellows engage directly with world-renowned faculty advisors from UC Berkeley and refine their work through rigorous peer collaboration and public capstone presentations. 

CLTC is currently working to package and circulate materials from the AI Policy Hub Fellows (including syllabi and lecture transcripts) so that other institutions can access lessons learned and replicate best practices from the program. As we move to share this successful curricular model, the AI Policy Hub’s alumni stand as proof of the program’s impact. Whether in frontier AI labs or the halls of government, AI Policy Hub alumni are serving as “policy accelerators,” translating technical expertise into actionable solutions — and stepping in to provide a new generation of leadership in AI.

In a February 2026 survey, 86% of past AIPH Fellows rated the program a 4 or 5 (out of 5) on whether it influenced their decision or ability to pursue a career in AI policy and safety, and the same share agreed that the fellowship prepared them “to engage with the complex technical and political realities” of their current roles. Most of the program’s alumni are currently working on high-leverage “frontier” challenges, including privacy and security risks of powerful models, AI safety evaluations, and long-term governance frameworks.

We caught up with some of our past AI Policy Hub Fellows to find out what they’re up to now — and what impact the program has had on their lives. 

2022-2023 Cohort: The Pioneers

Bridging the gap between technical reality and AI governance. 

2022-2023 AI Policy Hub Fellows. From top left to right: Alexander Asemota, Micah Carroll, Angela Jin, Zoe Kahn, Zhouyan Liu, and Cedric Whitney.

The inaugural class of the AI Policy Hub served as the program’s “proof of concept,” demonstrating that effective governance requires fluency in both technical systems and policy frameworks. Since completing their fellowships, most members of this group have transitioned directly into high-leverage roles in which an interdisciplinary skill set is crucial for success.

Alumni Micah Carroll and Cedric Whitney are driving safety research at OpenAI, translating policy concepts into engineering realities. Whitney was a co-author of the OpenAI GPT-OSS Model Card (August 2025), which defined the safety thresholds for OpenAI’s open-weight models. Meanwhile, Carroll is pursuing research that provides the scientific backbone for measuring AI manipulation; he co-authored a paper that was published at a top-tier machine learning conference.

Beyond the frontier labs, researchers from the first AI Policy Hub cohort are advancing academic research that translates into industry. Zoe Kahn, for example, is a Postdoctoral Researcher at the Germany-based Research Center for Trustworthy Data Science, where she studies the perspectives of AI researchers and practitioners on questions of ethics and safety, and works to strengthen participatory technology design.

“I explore how people living at the margins can meaningfully participate in the design of digital tech and tech policy — especially those shaping public life, governance, and digital rights,” Kahn says. “The AI Policy Hub Fellowship helped me learn how to translate dense academic research into approachable and actionable guidance for policymakers.”

Alexander Asemota, currently a Fellow with the Berkeley Institute of Data Science (BIDS), has continued work to solve the technical bottlenecks of AI fairness and auditing. Angela Jin serves as a Technical Lead for Hack4Impact, driving engineering solutions for social good. And Zhouyan Liu has continued to conduct research on data, privacy, property, and technology, both in the U.S. and in his home country of China.

On the whole, the career trajectories of the Hub’s first cohort reflect the program’s value as a pipeline from academia into the “engine room” of AI industry and practice.

2023-2024 Cohort: The Policy Shapers

Embedding technical defense into national infrastructure

From left to right: Marwa Abdulhai, Janiya Peters, Christian Ikeokwu, Jessica Newman, Ritwik Gupta, and Jessica Dai.

The second cohort of AI Policy Hub Fellows significantly widened the aperture of the program’s impact, as some moved beyond the research lab into the halls of federal government and national security. This group’s post-fellowship trajectory has been defined by active service and operational defense, as they have bridged the gap between theoretical safety and national policy implementation.

Ritwik Gupta exemplifies this shift: while at the AI Policy Hub, he served as Deputy Technical Director for Autonomy at the Defense Innovation Unit (DIU), and he later went on to serve as an Advisor on AI Policy to the FBI.

Janiya Peters brought her experience as an AIPH Fellow to the Library of Congress’ Connecting Communities Digital Initiative (CCDI). Meanwhile, researchers like Christian Ikeokwu, who studies AI jailbreaking and safety, and Jessica Dai, whose work focuses on algorithmic auditing, have continued to develop the technical methods necessary to hold AI systems accountable.

As a Cooperative AI fellow, Marwa Abdulhai is designing methods to evaluate and reduce deceptive behavior in AI dialogue systems. And Guru Vamsi Policharla, still a PhD student, is continuing his “work on building cryptographic tools for accountability in AI.”

2024-2025 Cohort: Actionable Governance

Operationalizing safety across high-stakes sectors

From left to right: Syomantak Chaudhuri, Jaiden Fairoze, Ruby Han, Laura Pathak, Ezinne Nwankwo, and Audrey Mitchell.

In the face of a fast-moving regulatory landscape, the most recent cohort of AI Policy Hub Fellows have focused on the implementation of safety norms in specific, high-stakes sectors. Rather than working on broad “AI policy,” these fellows are carving out niches in law, healthcare, and social services.

Audrey Mitchell is a third-year law student at Berkeley Law and Harvard Law School, as well as a research assistant at the Berkman Klein Center, and is translating AI risks for the legal system and litigation. “I address interdisciplinary questions on the cutting edge of AI, focusing on risks or concerns that are on the horizon but may not yet have received significant scholarly attention,” Mitchell says. “Currently, I’m considering how we should think about AI communication tools that filter or edit outgoing and incoming text messages for tone…. I’m focusing on such AI tools available via app to post-divorce co-parents, and comparing them to theories of restorative justice in family law and normative ideas of emotional regulation and autonomy generally.”

Read a blog post by Mitchell on the challenge of deepfakes and legal evidence, and enjoy her podcast on Rules of Evidence in the Age of AI.

Syomantak Chaudhuri, a PhD Candidate in the UC Berkeley Department of Electrical Engineering and Computer Sciences (EECS), studies how best to provide different levels of privacy to different users. Chaudhuri says the AI Policy Hub helped him “understand the landscape of policymaking and what goes into making actual changes in the real world.”

Laura Pathak, a PhD Candidate in the UC Berkeley School of Social Welfare, has integrated her experience at the AI Policy Hub into her dissertation research. “Becoming well-versed in the interdisciplinary AI policy field gave me a holistic and rigorous theoretical grounding for conducting more nuanced and multi-level analysis of the ethical, policy, and practice challenges of implementing AI in social work services,” Pathak says.

Ezinne Nwankwo, a PhD student in Computer Science, uses statistical and machine learning methods to better understand society (using social data) and to aid in decision-making processes. She is investigating ways to incorporate expert and community preferences into machine learning models, particularly in low- and middle-income economies where data is often scarce.

Mengyu (Ruby) Han is advancing policy through the Silicon Valley Leadership Group, which has established a new AI institute, the Institute for California AI Policy (I-CAP), focused on bridging the gap between Silicon Valley and Sacramento. “My current work focuses on bridging policymakers and technical expertise in AI, with the long-term hope of informing governance frameworks and policies both for external public-facing AI and, potentially, internal government adoption,” Han says. She notes that the AI Policy Hub gave her the opportunity to “engage with practitioners in different sectors: industry, academia, and civil society. It taught me how to leverage networks in different situations and what resources to turn to for solving policy problems.”

On the technical side, Jaiden Fairoze (now a visiting researcher at Meta) is working on the verification systems needed to enforce new standards. “My current position at Meta involves evaluating the privacy/security risks of powerful AI models,” Fairoze says. “Prior to my time with the AI Policy Hub, I was completely removed from the policy side of my research. Now, I understand the wider impact that research on AI and cryptography can have.”

The AI Policy Hub is guided by eleven distinguished Berkeley faculty advisors, including Deirdre Mulligan, Professor in the School of Information and former White House Principal Deputy Chief Technology Officer in the Office of Science and Technology Policy. Professor Mulligan was recently appointed to the California Innovation Council, a newly formed bridge between academia and state leadership designed to ensure California remains a global AI leader while prioritizing public safety.

Professor Mulligan noted, “The AI Policy Hub has a powerful track record of cultivating a new generation of leaders who possess the rare technical and policy fluency required to steer AI over the coming decades.”