The Center for Long-Term Cybersecurity at UC Berkeley is hosting an interdisciplinary workshop on July 15, 2020, to examine tensions and trade-offs at the intersection of emerging AI-related technologies, competition and antitrust policy, and both security and privacy. This RFP seeks proposals for scholarly inquiry into these topics to inform the workshop and, more broadly, to build the body of scientific knowledge foundational to these issues. Authors of selected proposals will be invited to present and discuss their research-in-progress at the workshop.
This call for papers seeks grounded arguments that examine tensions and trade-offs at the intersection of emerging machine learning-related technologies, competition and antitrust policy, and both security and privacy, in order to provide scientific insights that can inform policy discussions around this high-stakes set of emerging opportunities and concerns.
Ubiquitous, always-on sensors, IoT devices, digital assistants, and AR/VR environments are enabling the creation of what we consider new classes of data: new because they are collected in unique ways, and because they will be used to develop derivative products of distinctive value (for example, as inputs to machine learning).
We are interested in new policies and models for distributing this value in ways that preserve privacy and security. We believe that the foundational requirements of functioning and sustainable markets are not yet in place for many of these new data classes. For example, conventional privacy frameworks (such as notice and consent) have already been shown to be of limited value, and make even less sense in a ubiquitous computing environment that is intentionally designed to operate "in the background." Conventional concepts of ownership, property rights, and exchange (such as the right to exclude, market price discovery, and meeting-of-the-minds contracting) are complicated when data are so widely distributed, when the value of any particular data point is by itself almost always essentially zero, and when it is likely impossible to attribute value-add in a machine learning product across the billions of data inputs that helped train and/or test the model.
We are interested in analysis of new data collection practices and processes; of how to construct a legitimate, efficient, and/or fair distribution of value from data sets that will be used to train ML models; of the legal status of, and rights pertaining to, those data sets and models; and of the consequences for what can and cannot (or should and should not) be built from those models.
We start from the assumptions that some consent framework (or equivalent) is necessary; that some derivative value ought to be distributed; and that some form of negotiation, compromise, or contracting is needed for markets to function sustainably. We welcome papers that seek to substantiate and operationalize these assumptions and assess the inherent tensions and trade-offs among them. We equally welcome papers that modify or undermine one or more of these assumptions.
This call is intentionally open to multiple disciplines and interdisciplinary perspectives, including critical analyses that offer entirely different understandings of the problem and point toward alternative directions for better solutions. We welcome submissions from researchers at all stages of their careers, including graduate students, post-docs, faculty members, and practitioners from research labs in industry and civil society.
Proposals will be evaluated on the basis of scientific promise, potential impact on theory and practice, and potential for wide dissemination and use of knowledge, including specific plans for scholarly publications.
Proposals will be evaluated by an academic panel, and selection is at the discretion of CLTC. Upon participation in the workshop, authors of selected proposals will be awarded an honorarium of $7,500 and travel reimbursement in accordance with UC Berkeley travel policy.
- Proposal abstract and outline (maximum two pages total). Because this is an interdisciplinary call for a broad range of papers, it is important that proposals situate their arguments in the most relevant literature.
- Researcher’s CV
- To submit a proposal, please email the required materials to firstname.lastname@example.org by February 14, 2020.
- Proposal submission deadline: February 14, 2020
- Notification of results: March 2, 2020
- Draft papers due: July 1, 2020
- Workshop and draft paper discussion: July 15, 2020
This project is made possible by a grant from the William and Flora Hewlett Foundation with additional funding from Facebook in support of independent academic research.