Announcement / April 2024

UC Berkeley AI Policy Hub Now Accepting Applications for 2024-25 Cohort

Applications for the UC Berkeley AI Policy Hub are now open for the Fall 2024 – Spring 2025 Cohort!

If you are interested in applying or learning more, we invite you to join us on Friday, April 26, from 10:30 AM to 12:00 PM for the UC Berkeley AI Policy Research Symposium in the Banatao Auditorium of Sutardja Dai Hall at UC Berkeley. You will have the opportunity to hear from the co-directors and current AI Policy Hub Fellows, and to attend keynote presentations from Professors Niloufar Salehi and Ken Goldberg.

What are the benefits of participating in the AI Policy Hub?

Participants in the AI Policy Hub will have the opportunity to conduct innovative research and make meaningful contributions to the AI policy landscape, helping to reduce the harmful effects and amplify the benefits of artificial intelligence.

Program participants will receive faculty and staff mentorship, access to world-renowned experts and hands-on training sessions, connections with policymakers and other decision-makers, and opportunities to share their work at a public symposium. The AI Policy Hub will provide participants with practical training for AI policy career paths in federal and state government, academia, think tanks, and industry. Selected participants will receive up to 50% Graduate Student Researcher (GSR) positions for the full academic year (Fall ’24 and Spring ’25 semesters), with tuition and fee remission for both semesters.

Who should apply?

A key goal of the AI Policy Hub is to strengthen interdisciplinary research approaches to AI policy while expanding inclusion of diverse perspectives. We encourage UC Berkeley students actively enrolled in graduate degree programs (Master’s and PhD students) from all departments and disciplines to apply.

What kinds of projects will be supported?

We want to support graduate-level research and direct policy impact on the most pressing AI challenges. The most competitive candidates will have already completed some or much of their research and will be ready to translate that academic work for policymakers and other decision-makers. While projects are not limited to these areas, we are especially interested in those that aim to mitigate the harmful societal implications of generative AI, general-purpose AI, and foundation models.

Current topics of interest include, but are not limited to:

  • Innovative legislative/regulatory models for AI or interpretations of existing laws and oversight mechanisms in light of AI
  • Technical/governance processes for the validity, reliability, robustness, fairness, explainability, and transparency of generative AI systems
  • Responsible development and design of generative AI (e.g., data scraping, data protection, labor rights, safety and accountability mechanisms)
  • Responsible AI publication practices and policies (e.g., licensing, APIs, open-source or limited release, intellectual property rights)
  • Implications of generative AI for knowledge production, culture, democracy, and the economy
  • Monopolization and control vs. increasing access to AI development, infrastructure, and capabilities
  • Abuses of AI power (e.g., by governments, industry, or users resulting in: censorship, surveillance, human rights abuses, addictive or harmful design choices, dark patterns, toxic or harmful content, or disinformation)
  • Weaponization of AI (e.g., lethal autonomous weapon systems, AI cyber weapons)
  • Identification and mitigation of AI-enabled harms to civil and political rights (e.g., in education, voting, policing, housing, employment, and healthcare)
  • Geopolitical dynamics and opportunities for international coordination
  • Standards, frameworks, benchmarks, or policies for the responsible development, deployment, or use of generative AI
  • Monitoring of AI accidents, incidents, and impacts

What are the expectations of participants?

During the one-year program, students are expected to:

  • Conduct innovative research that addresses one or more of the topics of interest
  • Publish research through a white paper and/or journal article
  • Translate research into at least one policy deliverable (e.g., op-ed, policy memo)
  • Present their work at the annual symposium
  • Participate in weekly team meetings and bi-weekly individual meetings
  • Participate in the workshops and speaker series events
  • Support fellow members of their cohort by providing feedback

What is the application process?

To apply, students must submit the form found here by Tuesday, May 14 at 11:59 PM (PDT). In addition to a short list of questions about you and your project, the form will require you to upload your CV and a document (2 pages max) describing your proposed project and its expected policy impacts.

Finalists will be invited to interview with AI Policy Hub directors. Decisions are expected to be made by the end of June and the selected students will be notified via email.

What is the AI Policy Hub?

The AI Policy Hub is an interdisciplinary initiative training UC Berkeley researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.

We support annual cohorts of outstanding UC Berkeley graduate students who conduct innovative research and produce policy deliverables that help reduce the harmful effects and amplify the benefits of artificial intelligence.

Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI.

Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.

We are housed at the AI Security Initiative, part of UC Berkeley’s Center for Long-Term Cybersecurity, and at the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

We also collaborate with other UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Algorithmic Fairness and Opacity Group (AFOG), the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.

For more information, please see our website.

Questions?

If you have any questions about the application process, please contact Jessica Newman at jessica.newman@berkeley.edu.