Center for Long-Term Cybersecurity Announces 2017 Research Grantees

The Center for Long-Term Cybersecurity (CLTC) is pleased to announce the recipients of our 2017 research grants. In total, 28 groups of researchers will share over $1 million in funding.

The projects span a wide range of topics related to cybersecurity, including new methods for making cryptocurrencies more secure; protecting health information stored on mobile devices; teaching high-school computer science students how to “program for privacy”; and exploring potential limits on the use of digital controls in nuclear reactors.

“It is great to see the diversity of grantees studying forward-leaning issues, from machine learning and artificial intelligence to new regulatory regimes for cybersecurity,” said Betsy Cooper, Executive Director of CLTC. “We are also very excited to welcome a host of new grantees into the CLTC family.”

The purpose of CLTC’s research funding is not only to address the most interesting and complex challenges of today’s socio-technical security environment, but also to grapple with the broader challenges of the next decade’s environment. Research initiatives were sought in emerging areas such as cyber risk and insurance; the security implications of the internet of things, machine learning, and artificial intelligence; innovative approaches to the problems of identification and authentication on the internet; addressing the ‘talent pipeline problem’ for cybersecurity; and new approaches to the regulatory landscape of cybersecurity.

“At a time when cybersecurity issues are becoming yet more prominent, profound, and potentially foundational to the stability of societies and economies, it’s a privilege for CLTC to support a range of relevant basic and applied research projects on the Berkeley campus,” said Steven Weber, Faculty Director for CLTC. “The Berkeley research community is creative and courageous as well as disciplined, and that is precisely what the cybersecurity world needs right now.”

This broad, future-oriented scoping of the cybersecurity challenge has allowed CLTC to support a wide range of research projects. For example, one team will use virtual reality devices, together with wearable biometric devices and eye-tracking technologies, to test a form of personality assessment that can be used to screen candidates in law enforcement and other fields. Another group, led by researchers from UC Berkeley’s Human Rights Center, will identify what protocols can be put in place to protect the security of activists, legal practitioners, and human rights abuse victims as their cases are being investigated.

The 28 winning proposals were chosen through review by a cross-disciplinary committee of UC Berkeley faculty members and administrators. Two types of grants were given: seed grants, generally below $15,000, intended to fund exploratory studies, and discrete project grants of up to $100,000, given to projects with defined boundaries, clear outcomes, and impact potential. All principal investigators (PIs) have a UC Berkeley research affiliation and are enrolled in, or have completed, a graduate degree program.


Summary Descriptions of CLTC 2017 Research Grantees

Below are short summaries of research projects that will be funded by the UC Berkeley Center for Long-Term Cybersecurity through 2017 research grants.


Addressing the Privacy Gaps in Healthcare
Ruzena Bajcsy, Professor, EECS; Daniel Aranki, Ph.D. Candidate, EECS

Cyberattacks are projected to cost U.S. healthcare systems more than $305 billion over the next five years. To address this threat, this team will study a mathematical model of privacy that incorporates factors related to privacy preferences identified in the social sciences. They will validate and refine the model through user studies, and will develop an academic curriculum that captures both existing and newly generated knowledge, helping ensure continuity of research for future generations.


Adversarially Robust Machine Learning
Sadia Afroz, Research Scientist, ICSI

Machine learning provides valuable methodologies for detecting and protecting against security attacks at scale. However, machine learning for security differs from other domains: in a security setting, an adversary will adapt their behavior to avoid detection. This research team will explore methodologies for improving the robustness of machine-learning classifiers. This work will improve our understanding of the brittleness of machine-learning solutions and provide guidelines for improvement.


Allegro: A Framework for Practical Differential Privacy of SQL Queries
Dawn Song, Professor, EECS; Joseph Near, Postdoctoral Researcher, EECS

Current approaches for data security and privacy fail to reconcile the seemingly contradictory goals of leveraging data for positive outcomes while guaranteeing privacy protection for individuals. One promising approach is differential privacy, which allows general statistical analysis of data while providing individuals with a strong formal guarantee of privacy. This research team will design and develop techniques for practical privacy-preserving data analytics, enabling the use of advanced mechanisms like differential privacy in the real world.
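
To make the guarantee concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block for differentially private aggregates; the query, counts, and parameters are invented, and this is not Allegro’s implementation:

    import numpy as np

    def dp_count(true_count, epsilon, sensitivity=1.0):
        # A COUNT(*) query changes by at most 1 when one person is added or
        # removed, so Laplace noise with scale sensitivity/epsilon hides any
        # individual's presence while keeping the aggregate useful.
        return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Hypothetical query: SELECT COUNT(*) FROM patients WHERE age > 65
    print(dp_count(1042, epsilon=0.1))  # noisy answer, safe to release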


Analysis of Security Breaches of Local Law Enforcement Agency Data

Catherine Crump, Assistant Clinical Professor, Berkeley Law & Acting Director of the Samuelson Law, Technology & Public Policy Clinic; Rena Coen, Internet Law & Policy Foundry Fellow; David Schlussel, J.D. Candidate, Berkeley Law

This research project will conduct a preliminary assessment of whether security breaches of local law enforcement agency data are sufficiently numerous and serious to warrant public concern and a policy response. If warranted, it will also set out tentative recommendations for policy changes. The study will describe the nature and extent of local law enforcement data breaches, categorizing them to provide an analytically useful picture of agencies’ vulnerabilities. It will also include a legal and policy analysis that may recommend statutory revisions to address this challenge.


Citizen Advocacy in a Connected World

Jason Danker, Andrea Gagliano, Paul Glenn, Molly Mahar, Sasha Volkov, Emily Witt, MIMS Students, UC Berkeley School of Information

As connected city initiatives become increasingly common, their benefits, such as increased efficiency and improved security, come with significant potential harms, including privacy and security vulnerabilities, as well as the danger of exacerbating socioeconomic disparity. To raise awareness of these issues and encourage community-based discussion, the team plans to create a physical, transportable educational installation that will encourage community and advocacy organizations, governments, and citizen stakeholders to engage critically in the development of smart communities.


Computing on Encrypted Databases with No Information Leakage

Alessandro Chiesa, Assistant Professor, EECS; Raluca Ada Popa, Assistant Professor, EECS

Encrypted databases enable users to compute queries while the data remains encrypted, but encryption alone does not suffice to protect sensitive information, because queries leak information through side channels: for example, the size of the output or the timing of a query may reveal sensitive details. This research team proposes to design, build, and evaluate an encrypted database that leaks no information about the data while remaining practical for a representative class of applications.
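
To illustrate just the output-size channel, here is a standard padding mitigation in miniature (bucket sizes are invented, timing is not addressed, and this is not the team’s design):

    import json

    def pad_result(rows, buckets=(1, 4, 16, 64, 256)):
        # Without padding, the ciphertext length reveals the exact number of
        # matching rows; padding to a bucket reveals only a coarse range.
        # (Assumes results fit within the largest bucket.)
        target = next(b for b in buckets if b >= len(rows))
        dummies = [{"dummy": True}] * (target - len(rows))
        return json.dumps(rows + dummies).encode()  # encrypt this payload

    padded = pad_result([{"id": 1}, {"id": 2}, {"id": 3}])  # 3 rows -> bucket of 4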


Cyber-Infrastructures of the Personal Data Economy

Marion Fourcade, Professor, Sociology; Daniel Kluttz, Ph.D. Student, Sociology

This group will explore the political economy of personal data, specifically the transport, storage, circulation, and processing of individual data by commercial entities. Using in-depth, semi-structured interviews with experts and professionals in the data services sector, supplemented by direct observation of professional meetings and conferences, they will explore what security challenges arise in this economy as data from dispersed sources are increasingly pulled together and circulated within and between organizations.


Data Security Breach Notification in Singapore: The Role of the Data Privacy Officer in Enhancing Trust Relationships

Visakha Phusamruat, J.S.D. Candidate, Berkeley Law

This project will examine how the voluntary data security breach notification mechanism found in Singapore’s Personal Data Protection Act has been implemented by private organizations in their policies and practices. Applying comparative and qualitative methods, including interviews with chief data privacy officers, the researchers seek to understand organizational behavior in complying with data security regulations, and to identify a regulatory design that provides adequate assurance while encouraging organizations to go beyond legal compliance and act in a trustworthy way toward affected consumers.


Exploring Internet Balkanization Through the Lens of Regional Discrimination

Jenna Burrell, Associate Professor, UC Berkeley School of Information; Anne Jonas, Ph.D. Student, UC Berkeley School of Information

This research study will examine practices of regional discrimination, in which users from select geographies are blocked from access to internet resources, often as part of efforts to prevent credit card fraud and other scams. Using a combination of qualitative interviews, crowdsourcing, and automated measurement modules, the team will consider such questions as: How prevalent is regional discrimination, and what are its characteristics? Which categories of websites are most likely to deploy it? How do those responsible justify these mechanisms, and what are the implications for excluded users and excluding websites?


Human-Centered Design Study on Cybersecurity of Soft Co-Robotic Systems

Alice M. Agogino, Professor, Mechanical Engineering; Euiyoung Kim, Postdoctoral Design Fellow, Jacobs Institute for Design Innovation

A team of researchers will study interactions between humans and “co-robots,” with the purpose of identifying how sensitive personal information is generated and shared between humans and Internet of Things (IoT)-connected co-robots. Using human-centered design research methods (e.g., interviews, observations, scenarios, surveys, prototyping, and testing), they will characterize information that is vulnerable to breach and thus requires stronger storage and communication security in future co-robotic systems based on tensegrity structures.


Identifying Audio-Video Manipulation by Detecting Temporal Anomalies

Alexei Efros, Associate Professor, EECS; Andrew Owens, Postdoctoral Scholar, EECS

Inexpensive recording devices, from cellphones to surveillance cameras, have made audiovisual data ubiquitous, while commercially available software has made it significantly easier to manipulate this data. As a result, it has become increasingly important to develop tools for verifying the authenticity of audiovisual data. This research will explore a new method, based on deep neural networks, for detecting fake or manipulated videos. The method will work by identifying situations in which the audio and visual streams of a video are misaligned, a common result of video manipulation.
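
The project’s detector will be learned end-to-end with deep networks; as a toy illustration of the underlying cue, cross-correlating a clip’s per-frame audio energy with its visual motion energy exposes a temporal offset (all signals below are synthetic):

    import numpy as np

    def estimated_lag(audio_energy, motion_energy):
        # Normalize both per-frame energy signals, then find the lag at which
        # their cross-correlation peaks; an untampered clip peaks near zero.
        a = (audio_energy - audio_energy.mean()) / audio_energy.std()
        v = (motion_energy - motion_energy.mean()) / motion_energy.std()
        return int(np.argmax(np.correlate(a, v, mode="full"))) - (len(v) - 1)

    rng = np.random.default_rng(0)
    motion = rng.random(200)
    audio = np.roll(motion, 5) + 0.1 * rng.random(200)  # dubbed 5 frames late
    lag = estimated_lag(audio, motion)
    print(lag, "-> suspicious" if abs(lag) > 2 else "-> consistent")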


Information Theoretic Methods in Cybersecurity

Venkatachalam Anantharam, Professor, EECS

Information-theoretic security offers the strongest possible security guarantees, since information-theoretically secure keys are unbreakable in principle, without the need for any hardness assumption. This project studies information-theoretic key generation in the context of the Internet of Things (IoT), in which a large number of agents have limited capabilities, and secret keys need to be created on demand by subsets of these agents for specific applications. The envisioned architecture is a process of interactive message exchanges that creates a distributed, approximately shared key, from which subsets of nodes can extract a secure key on demand.
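
For a flavor of the final step, here is a minimal sketch of privacy amplification, assuming interactive reconciliation has already given two nodes an identical string; SHA-256 stands in for the 2-universal hash family a formal proof requires, and this is not the project’s protocol:

    import hashlib
    import secrets

    def privacy_amplification(shared: bytes, out_bytes: int = 16) -> bytes:
        # Hash the long, partially secret shared string down to a short key.
        # An eavesdropper with limited information about `shared` learns
        # essentially nothing about the hashed output.
        return hashlib.sha256(shared).digest()[:out_bytes]

    shared = secrets.token_bytes(32)        # stand-in for reconciled measurements
    key_a = privacy_amplification(shared)   # computed locally at node A
    key_b = privacy_amplification(shared)   # computed locally at node B
    assert key_a == key_b                   # same 128-bit key, never transmitted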


Linking Behavioral and Physiological Responses in Virtual Reality to Privacy and Security Outcomes

Coye Cheshire, Associate Professor, UC Berkeley School of Information

Using controlled, laboratory-based designs, this team of researchers will explore how virtual reality (VR), combined with small wearable biometric devices and eye-tracking technologies, can help generate behavioral profiles in simulated social-interaction scenarios. The results will be of interest to those who aim to discreetly identify (and perhaps correct) individuals, such as correctional or law enforcement officers, who are most likely to make security- and privacy-related mistakes.


New Frontiers in Public-Key Cryptography
Sanjam Garg, Assistant Professor, UC Berkeley Department of Computer Science; Daniel Masny, Postdoctoral Researcher, UC Berkeley Department of Computer Science

Encryption is ubiquitous in maintaining the privacy of our personal communications over the Internet. However, as all our devices become connected to the Internet and our data is shipped to third-party servers, new security problems arise. Unfortunately, traditional encryption schemes are not suitable for many of these settings. The focus of this project is to realize sophisticated encryption schemes for securing the future Internet, while making only the minimal computational intractability assumptions essential to their realization.


Programming for Privacy in High School Classrooms
Gerald Friedland, Adjunct Assistant Professor, UC Berkeley, EECS, CITRIS

Learning about online privacy early (beginning in high school) can be an effective entrée to complex topics in cybersecurity and networking, yet many teachers have expressed concern that such non-programming content might not be compatible with their schools’ computer science standards. A team of researchers will develop “Teaching Privacy” lessons that integrate programming and privacy, with exercises in core areas including API programming and “big data”-style association rule mining. The researchers will work with high school teachers to evaluate the exercises as part of a potential “Programming for Privacy” curriculum.
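
One possible flavor of such an exercise (invented here, not the project’s actual lesson) is computing support and confidence, the two basic association-rule statistics, over toy data:

    # Toy "transactions": sites visited in one browsing session per student.
    sessions = [
        {"maps", "search", "video"},
        {"maps", "search"},
        {"search", "video"},
        {"maps", "search", "shopping"},
    ]

    def support(itemset):
        return sum(itemset <= s for s in sessions) / len(sessions)

    # Rule {maps} -> {search}: how often sessions containing "maps" also
    # contain "search" -- the kind of inference services can draw from
    # seemingly innocuous behavioral data.
    sup = support({"maps", "search"})
    conf = sup / support({"maps"})
    print(f"support={sup:.2f}, confidence={conf:.2f}")  # 0.75 and 1.00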


Secure Machine Learning
David Wagner, Professor, EECS

Machine learning is used for many purposes, and the great success of deep learning has recently stimulated considerable interest in applying it to new domains. However, researchers have recently discovered that deep learning appears to be deeply vulnerable to attack: it is possible to construct malicious inputs that fool the learning method. This study will examine how to harden machine learning against such attacks, providing a more robust foundation for applications that use machine learning in settings where security is necessary.
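
The canonical demonstration of this brittleness is the “fast gradient sign” attack. Below is a minimal sketch against a two-feature logistic regression with made-up weights; the project concerns deep networks, where the same gradient trick applies:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w, b = np.array([1.0, -1.0]), 0.0     # a "trained" detector (weights invented)
    x, y = np.array([0.5, 0.3]), 1.0      # input correctly classified as class 1

    # Fast gradient sign method: move each feature a small step epsilon in
    # the direction that increases the classifier's loss.
    grad_x = (sigmoid(w @ x + b) - y) * w
    x_adv = x + 0.3 * np.sign(grad_x)

    print(sigmoid(w @ x + b))      # ~0.55 -> classified as 1
    print(sigmoid(w @ x_adv + b))  # ~0.40 -> flips to 0 despite a tiny change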


Secure & Usable Backup Authentication
David Wagner, Professor, EECS; Serge Egelman, Director, Usable Security and Privacy Research, ICSI; Nathan Malkin, Ph.D. Candidate, EECS

Backup authentication is a crucial yet often overlooked problem in cybersecurity. Passwords and other methods of authentication are fixtures of digital life, but the processes by which we recover them are far less well understood or studied. This research will focus on making backup authentication more secure by going beyond conventional methods, including comprehensively designing and studying “social authentication” systems, which allow users to authenticate by leveraging their social networks.
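
One well-studied building block for social authentication (not necessarily the design this team will pursue) is k-of-n secret sharing: a recovery key is split among trusted contacts so that any k of them can help restore access, while fewer than k learn nothing. A minimal Shamir sketch, assuming Python 3.8+ for modular inverses:

    import secrets

    P = 2**127 - 1  # Mersenne prime; all arithmetic is over GF(P)

    def make_shares(secret, k, n):
        # Shamir's scheme: hide `secret` as the constant term of a random
        # degree-(k-1) polynomial and hand out n points on it.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    key = secrets.randbelow(P)
    shares = make_shares(key, k=3, n=5)    # one share per trusted friend
    assert reconstruct(shares[:3]) == key  # any 3 friends suffice to recover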


Securing Protected Health Information in Mobile Health Devices
Anil Aswani, Assistant Professor, IEOR; Xin Guo, Professor, IEOR

These researchers will address cybersecurity problems related to protected health information (PHI) on mobile health devices. Drawing upon theory from statistics, machine learning, principal-agent modeling, and mean-field game models, they will design scalable machine-learning methods to identify duplicate or stolen/counterfeit PHI in order to secure the use of PHI for identification and authentication of patients, and they will study the problem of incentivizing investments and software updates for cybersecurity of PHI on mobile health devices.


Sharing Personal Information with Humanlike Machines: The Role of Human Cues in Anthropomorphism and Trust in Machines
Juliana Schroeder, Assistant Professor, Management of Organizations, Haas School of Business; Matthew Schroeder, M.S., CISSP, CEH, CSEP, Senior Cybersecurity Professional, Threat and Vulnerability Management Lead

These researchers will explore the degree to which humans’ trust in machines depends on whether they believe the machine has a humanlike mind, with the capacity to think and feel. Integrating research in social psychology and human-computer interaction, they will build a new theoretical model of anthropomorphism in which they experimentally test the marginal contribution of different types of human cues (e.g., language, face, voice) to the belief that a machine has a humanlike mind.


Strengthening Cybersecurity in Human Rights Investigations
Alexa Koenig, Executive Director, Human Rights Center and Lecturer, Law and Legal Studies; Eric Stover, Faculty Director, Human Rights Center and Adjunct Professor, Law and Public Health

International courts and human rights organizations increasingly use publicly available information (YouTube videos, Facebook photos, tweets, etc.) to support investigations of human rights violations and war crimes. Researchers from UC Berkeley’s Human Rights Center will identify and share best cybersecurity practices for open-source investigations and digital verification of video and images, which could benefit international courts and human rights victims around the world.


Toward One-Step, Three-Factor Authentication Using Custom-Fit Ear EEG
John Chuang, Professor, UC Berkeley School of Information

A team of researchers will conduct the first experimental research into one-step, three-factor authentication, using custom-fitted earpieces that measure EEG (electroencephalogram) brainwave signals. Through this method, a user can perform a single mental task to present three authenticators at once: a knowledge factor (their chosen secret thought and/or mental task), an inherence factor (their brainwave signals as a form of biometric), and a possession factor (the EEG-sensing earpiece custom-fitted to their ear). The research will assess the accuracy and usability of this authentication method.
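
The matching step of such a system might look like the following toy template-matching sketch; feature extraction from raw EEG and the actual experimental design are the hard parts and are not modeled here, and all vectors are synthetic:

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def enroll(recordings):
        # Average feature vectors recorded while the user performs their
        # secret mental task into a single template.
        return np.mean(recordings, axis=0)

    def authenticate(template, attempt, threshold=0.95):
        return cosine(template, attempt) >= threshold

    rng = np.random.default_rng(1)
    signature = rng.normal(0, 1, 64)   # the user's idiosyncratic brainwave pattern
    template = enroll(signature + 0.1 * rng.normal(0, 1, (10, 64)))
    print(authenticate(template, signature + 0.1 * rng.normal(0, 1, 64)))  # True
    print(authenticate(template, rng.normal(0, 1, 64)))                    # False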


When to Avoid Digital Control: A Cybersecurity Case Study for Advanced Nuclear Reactors
Per Peterson, William and Jean McCallum Floyd Chair, Department of Nuclear Engineering; Michael Nacht, Thomas and Alison Schneider Chair, Goldman School of Public Policy; Charalampos Andreades, Postdoctoral Researcher, Department of Nuclear Engineering

This team proposes to identify and study the major issues associated with designing the interface between digital control for normal operation of nuclear reactors, where digital control may be unreliable or may even produce deliberately unsafe control feedback (e.g., under cyberattack), and passively safe reactor designs, where disconnecting digital control can render the facility safe. This project will create a forum to discuss what the limits of digital control should be, and where critical infrastructure should instead be designed to function appropriately through intrinsic or analog feedback, independent of digital control.


Zero Knowledge Proofs for Privacy-Preserving Cryptocurrencies
Alessandro Chiesa, Assistant Professor, UC Berkeley, EECS

Cryptocurrencies have the potential to change the way we conduct payments across the globe. However, current systems are “risky” because they do not provide privacy to users; this lack of privacy not only affects individual users but also skews the economic properties of the currency. Zero knowledge proofs have been shown to be one of the main tools to endow cryptocurrencies with suitable privacy, and have been deployed in the real world. This project aims to add new features to libsnark, a leading open-source library for zero knowledge proofs.
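
For intuition, the classic Schnorr identification protocol is among the simplest zero-knowledge proofs: the prover convinces a verifier that it knows a discrete logarithm without revealing it. The sketch below uses deliberately tiny toy parameters and is unrelated to the zk-SNARK machinery libsnark actually implements:

    import secrets

    p, q, g = 23, 11, 4          # toy group: g generates the order-q subgroup mod p
    x = 7                        # prover's secret
    y = pow(g, x, p)             # public key

    r = secrets.randbelow(q)
    t = pow(g, r, p)             # 1. prover commits to randomness r
    c = secrets.randbelow(q)     # 2. verifier sends a random challenge
    s = (r + c * x) % q          # 3. prover responds; s alone reveals nothing about x

    # Verifier accepts iff g^s == t * y^c (mod p), learning nothing else about x.
    assert pow(g, s, p) == t * pow(y, c, p) % p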


Projects Jointly Funded with the Center for Technology, Society & Policy


Actuarial Justice in the 21st Century

Johann Koehler, Ph.D. Candidate, Jurisprudence and Social Policy and J.D. Candidate, Berkeley Law; Gil Rothschild Elyassi, Ph.D. Candidate, Jurisprudence and Social Policy and Berkeley Law

Court actors increasingly rely on statistical predictions about an accused person’s future behavior to inform judgments, as part of an administrative style called “actuarial justice.” Actuarial instruments now inform every decision point, such that actuarial justice and the logic of data science have become an organizing principle for the administration of punishment as a whole. This project will ask how actuarial justice developed, inquire into its scope, and explore its promises and pitfalls.


I Regret to Inform You that Your Private Information Has Been Compromised

Naniette H. Coleman, Ph.D. Candidate, UC Berkeley, Sociology; Andrew Yang, Undergraduate, UC Berkeley; Tiffany Lo, Undergraduate, UC Berkeley; Amanda Lee, Undergraduate, Wellesley College

This research team will expand the information available and accessible to the public on privacy, data protection, and cybersecurity. Specifically, the team will collaborate with the Wiki Education Foundation and the UC Berkeley American Cultures Librarian to establish a “Wikipedia Student Working Group on Privacy Literacy” at UC Berkeley and hold regular Wiki-Edit-A-Thons, with the goal of launching a website highlighting innovative work being done in the privacy, data protection, and cybersecurity space. In addition, the team will work to build an interdisciplinary privacy community at UC Berkeley.


Mapping Sites of Politics: Values at Stake in Mitigating Toxic News

Daniel Griffin, Ph.D. Student, UC Berkeley School of Information

Many have questioned the influence that “fake news” may have had in the 2016 US presidential election, yet many of the approaches proposed for remedying this challenge may act to censor or chill online discourse, separate online communities, destroy legitimate debate, and create centralized points of media control. This project will inform and support the design and critique of proposals and approaches to mitigating “fake news” by inserting values and a long-term view into the conversation. The project will develop metrics from a framework of democratic values implicated in proposed countermeasures, and will run scenario thinking and value-fictions workshops to explore the implications of proposals and test the values framework.


Preparing for Blockchain: Policy Implications and Challenges for the Financial Industry

Ritt Keerati, M.P.P. Student, Goldman School of Public Policy; Chloe Brown, M.P.A. Student, Goldman School of Public Policy

Blockchain, a distributed ledger technology that maintains a continuously growing list of records, is an emerging technology that has captured the imagination and investment of Silicon Valley and Wall Street. The technology originally propelled the emergence of virtual currencies such as Bitcoin, and it now holds promise to revolutionize a variety of industries, most notably the financial sector. This project endeavors to understand the potential policy implications of blockchain technology, particularly as it relates to the financial industry, and to help policymakers address their strategic planning and risk-assessment needs as they prepare for the rise of blockchain.
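
At its core, the “continuously growing list of records” is a hash chain. A minimal sketch, omitting consensus, signatures, and everything else that makes production blockchains difficult:

    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain, records):
        # Each block commits to its predecessor's hash, so altering any
        # earlier record invalidates every block that follows it.
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev": prev, "records": records})

    ledger = []
    append_block(ledger, ["alice pays bob 5"])
    append_block(ledger, ["bob pays carol 2"])

    ledger[0]["records"][0] = "alice pays bob 500"     # tamper with history
    print(ledger[1]["prev"] == block_hash(ledger[0]))  # False: chain broken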


Tools and Methods for Inferring Demographic Bias in Social Media Datasets

Samuel Maurer, Ph.D. Candidate, UC Berkeley City & Regional Planning

Social media posts from smartphones are an increasingly useful data source for researchers and policymakers. For example, place-based posts can help city planners assess how infrastructure or public space is being used, and help identify the needs of different communities. But it is important to know who is represented in these data streams and who may be missing. This project will develop practical tools and methods for inferring demographic biases, using rule-based algorithms to determine the neighborhoods where frequent posters live and then comparing the demographic characteristics of these places with those of the population at large, thereby helping identify biases in characteristics like race, income, and education.
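
One simple version of such a rule, with invented place names and data: take the modal neighborhood of a user’s late-night posts as their likely home, then compare the demographics of inferred home areas against census figures:

    from collections import Counter
    from datetime import datetime

    # Hypothetical geotagged posts: (user, neighborhood, timestamp).
    posts = [
        ("u1", "Mission",  datetime(2017, 3, 1, 23, 30)),
        ("u1", "SoMa",     datetime(2017, 3, 2, 12, 15)),
        ("u1", "Mission",  datetime(2017, 3, 3, 1, 5)),
        ("u2", "Downtown", datetime(2017, 3, 1, 2, 40)),
    ]

    def infer_home(user, posts, night=(22, 6)):
        # Heuristic: people mostly post from home late at night, so the
        # modal neighborhood of nighttime posts approximates residence.
        start, end = night
        nocturnal = [nbhd for u, nbhd, ts in posts
                     if u == user and (ts.hour >= start or ts.hour < end)]
        return Counter(nocturnal).most_common(1)[0][0] if nocturnal else None

    print(infer_home("u1", posts))  # "Mission" -> join to census demographics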