Launched in 2022, the AI Policy Hub trains UC Berkeley researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future. Each year, the Hub supports cohorts of six UC Berkeley graduate students as they conduct innovative research and produce policy deliverables aimed at reducing the harmful effects and amplifying the benefits of AI. (Applications for the 2024-2025 Cohort are due on May 14, 2024.)
On April 26, the 2023-2024 AI Policy Hub Fellows will present their research at the AI Policy Research Symposium, to be held in the Banatao Auditorium of Sutardja Dai Hall at UC Berkeley. The event will feature keynote presentations from distinguished UC Berkeley faculty members Ken Goldberg and Niloufar Salehi. Topics to be discussed include human-robot complementarity, human-centered AI, cryptography for AI auditing, resistance to text-to-image generators in creator communities, safety “meta-principles” for generative AI, computer vision for humanitarian assistance and disaster response, deception in AI systems, and collective action for algorithmic accountability.
To offer a “sneak peek” of what to expect at the AI Policy Research Symposium, we reached out to three of the Fellows: Jessica Dai, Ritwik Gupta, and Janiya Peters.
Q&A with Jessica Dai
Jessica Dai is a PhD student in EECS at UC Berkeley, advised by Ben Recht and Nika Haghtalab.
How would you describe the goal of your research, and what is the potential importance of this work?
Some of my high-level interests are how we should think about auditing and evaluation for algorithmic systems, and how we can aggregate individual experiences into more holistic evaluations of a system. For this project, we’re trying to think about how we might use crowdsourced reports to make quantitative assessments of whether a system affects different demographic subgroups disparately, i.e., whether some form of discrimination may be occurring.
How are you going about conducting your research?
From a technical standpoint, this project sits at the intersection of several distinct statistical and algorithmic challenges. We’re working to develop a method that can identify disparity with provable statistical guarantees, and that works both in theory and in practice.
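Dai’s method itself isn’t spelled out in the interview, so the following is only a minimal sketch of the statistical core of the problem: given crowdsourced harm reports from two demographic subgroups, test whether the reported harm rates differ by more than chance would explain. The function name and all numbers here are hypothetical.

```python
import math

def disparity_z_test(harms_a: int, n_a: int, harms_b: int, n_b: int):
    """Two-proportion z-test: do reported harm rates differ between
    subgroup A and subgroup B more than chance would explain?

    Returns (observed rate gap, two-sided p-value under the null of no disparity).
    """
    p_a, p_b = harms_a / n_a, harms_b / n_b
    # Pooled harm rate under the null hypothesis that both groups share one rate.
    pooled = (harms_a + harms_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical counts: 90 of 1,200 reports from group A flag harm vs. 45 of 1,500 from group B.
gap, p = disparity_z_test(90, 1200, 45, 1500)
print(f"rate gap = {gap:.3f}, two-sided p = {p:.4f}")
```

Note that a textbook test like this assumes reports are random samples from each subgroup; crowdsourced reports are self-selected, which is exactly the kind of complication that makes provable guarantees in this setting a genuine research problem.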
What would you want to say to policymakers or industry leaders about what they could/should be doing differently, based on your research?
Establishing a system for reporting is important, but perhaps it’s even more important to have some way to (a) learn from and (b) take action based on those reports.
Q&A with Ritwik Gupta
Ritwik Gupta is an AI PhD student at UC Berkeley, advised by Shankar Sastry, Trevor Darrell, and Janet Napolitano.
How would you describe your research for a lay audience? What is your main goal, and what is the potential importance of this work?
I build new AI methods that are able to reason in complex, chaotic, and ever-changing situations, such as humanitarian assistance and disaster response. These same methods can be used outside of their intended applications, for example in military affairs. My work analyzes where these blurred boundaries lie, what policies should be created to govern these dual-use technologies, and how different agencies should interact accordingly.
How are you going about conducting your research? What are your methods?
I build new computer vision methods that can generalize under distribution shift, as well as computer vision models that can handle the specific constraints of data used in humanitarian assistance and disaster response. I then translate those methods to the field, partnering with first responders and operators to test them in real life. This ethnographic experience is then used to improve the methods.
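Gupta’s actual pipeline isn’t detailed in the interview, but as a rough illustration of what “generalizing under distribution shift” is measured against, the toy harness below compares a model’s accuracy on in-distribution imagery with its accuracy on a corrupted copy. Everything here, including the stand-in model and the noise-based shift, is an assumption for illustration only.

```python
import numpy as np

def accuracy(model, images: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of images the model classifies correctly."""
    return float(np.mean(model(images) == labels))

def shift_gap(model, images, labels, corrupt):
    """Accuracy on in-distribution imagery vs. a shifted copy produced by
    `corrupt` (standing in for smoke, haze, sensor noise, or a new environment)."""
    return accuracy(model, images, labels), accuracy(model, corrupt(images), labels)

# Toy stand-ins: a trivial brightness classifier and additive sensor noise as the shift.
rng = np.random.default_rng(0)
images = rng.normal(size=(64, 32, 32))
labels = (images.mean(axis=(1, 2)) > 0).astype(int)
model = lambda x: (x.mean(axis=(1, 2)) > 0).astype(int)
corrupt = lambda x: x + rng.normal(scale=2.0, size=x.shape)

clean_acc, shifted_acc = shift_gap(model, images, labels, corrupt)
print(f"clean accuracy: {clean_acc:.2f}, shifted accuracy: {shifted_acc:.2f}")
```

A small gap between the two numbers is the kind of robustness such methods aim for; field testing with first responders then checks whether the shifts simulated offline reflect the shifts that actually occur in disasters.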
What would you want to say to policymakers or industry leaders about what they could/should be doing differently, based on your research?
There is not sufficient coordination between the state and local governments that are primarily in charge of domestic disaster response operations and the federal agencies that have massive resources usually reserved for military affairs. We need to bridge this technological and policy gap effectively and quickly.
What impact has securing the AI Policy Hub Fellowship had on you personally? Professionally?
I have been able to build a new set of collaborators in this cohort, which has shaped my AI research!
Q&A with Janiya Peters
Janiya Peters is a PhD student at the UC Berkeley School of Information, advised by Deirdre Mulligan.
How would you describe your research for a lay audience? What is your main goal, and what is the potential importance of this work?
My research explores the ways in which text-to-image generators compromise visual artists’ intellectual property rights, and how visual artists adopt strategies of resistance to retain agency over their intellectual property, labor, and compensation. The goal of this project is to locate sites of dispute between these models and affected communities, including user-platform contracts, privacy settings, and general breakdowns in copyright governance. This project aims to formalize artists’ concerns into protocols centered on “consent, credit, compensation, control [and transparency],” as articulated by creative leaders, including renowned concept artists and illustrators Karla Ortiz and Steven Zapata, at the Federal Trade Commission’s “Creative Economy and Generative AI” panel.
How are you going about conducting your research? What are your methods?
This research could not be done without the willing participation of my interviewees, including illustrators, painters, sculptors, and UX and graphic designers. We engaged in critical discussions about attitudes toward text-to-image generators and the current mechanisms for retaining and/or enforcing rights over their work. Some participants shared artifacts, such as artwork, creative software programs, or presentations, for review. The diversity of mediums, specialties, and experiences helped us understand how text-to-image generators are changing different creative industries, and where artists push back against these tools within their workflows. The collaborative nature of this work is crucial, and I thank each artist who has contributed their knowledge and expertise.
What impact has securing the AI Policy Hub Fellowship had on you personally and professionally?
The AI Policy Hub Fellowship was critical to my development as an Information scholar who constantly questions the ramifications of new media systems within our social fabric. Jessica Newman and Brandie Nonnecke [the co-directors of the AI Policy Hub] were instrumental in providing writing workshops, meetings with policymakers and technical specialists, and collaborative opportunities. With their help, I translated my research into actionable protocols, including suggestions to the U.S. Copyright Office on incorporating consent-based approaches into DMCA takedown notice procedures. I hope to continue my research advocacy into the summer as a Library of Congress Junior Fellow working with the Connecting Communities Digital Initiative (CCDI) team.