The AI Policy Hub is an interdisciplinary initiative training UC Berkeley researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.
We support annual cohorts of six outstanding UC Berkeley graduate students who conduct innovative research and produce policy deliverables that help reduce the harmful effects and amplify the benefits of artificial intelligence.
Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI.
Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.
We are housed within the AI Security Initiative, part of UC Berkeley’s Center for Long-Term Cybersecurity, and the CITRIS Policy Lab, part of the University of California’s Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).
We also collaborate with other UC Berkeley departments and centers working on AI governance and policy, including the Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.
Interested in applying?
If you are a UC Berkeley graduate student interested in applying for the 2024–2025 AI Policy Hub Fellowship, please check back on this webpage in Spring 2024 for further information.