The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2019 research grants. In total, 30 groups of researchers will share roughly $1.3 million in funding to support a broad range of initiatives related to cybersecurity and digital security issues emerging at the intersection of technology and society.
The purpose of CLTC’s research funding is to address the most interesting and complex challenges of today’s socio-technical security environment, and particularly to grapple with the broader challenges of the next decade’s environment. The Center focuses its research in four priority areas: machine learning and artificial intelligence, building the cyber-talent pipeline, improving cybersecurity governance, and protecting vulnerable online populations.
Some of the projects are renewals of previously-funded projects that have already yielded important results, including research on “passthoughts,” adversarial machine learning, and game apps that collect children’s data. New initiatives to be funded include protecting users from phone phishing (voice phishing); improving the cybersecurity of fall-detection systems and other health devices commonly used by the elderly; understanding the privacy implications of smart home devices for domestic workers; using blockchain for aggregating data for machine learning; and more.
All principal investigators (PIs) have a UC Berkeley research affiliation, and many of the initiatives involve partners from outside institutions. The winning initiatives include researchers from a broad array of disciplines and academic units, including the Department of Electrical Engineering and Computer Science (EECS), the School of Information, the International Computer Science Institute, and the Simons Institute, as well as the Department of Jurisprudence and Social Policy and other social science units.
“We are excited to provide funding to this outstanding group of researchers, whose work is addressing important emerging issues in security, privacy, and other domains,” said Ann Cleaveland, Executive Director of CLTC. “There is no ‘silver bullet’ for cybersecurity. The diverse projects of our 2019 cohort reflect that information security is multi-faceted, spanning technical work, social practices, and the discourses that surround them in public life.”
CLTC awards two types of grants: seed grants, generally below $15,000, are intended to fund an exploratory study, while discrete project grants of up to $100,000 fund projects that have defined boundaries with clear outcomes and impact potential.
“We are honored to support this cross-disciplinary mix of research projects,” said Steven Weber, Faculty Director of CLTC and Professor in the UC Berkeley School of Information. “These projects are directly aligned with CLTC’s mission to explore what lies over the horizon when it comes to the security implications of people and digital technology.”
Summary Descriptions of CLTC 2019 Research Grantees
Below are short summaries of new research projects that will be funded by the UC Berkeley Center for Long-Term Cybersecurity through 2019 research grants.
Automatic Guidance for Privacy-Aware Browsing
Gerald Friedland, Adjunct Assistant Professor, UC Berkeley Department of Electrical Engineering and Computer Science (EECS)
The recent change in EU law (as well as, for example, the Facebook login key activation scheme) requires most authentication-enabled websites to have a privacy policy under /privacy, even when they go through Google or GitHub. This seed project aims to develop a prototype browser plugin that automatically retrieves the privacy policy page and uses natural language processing to parse it and compare it with other policies, identifying elements in common with baseline policies. Based on this assessment, the tool would then display a green, yellow, or red square in the browser, alerting the user to potential issues of concern.
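The core idea, comparing a site's policy against a baseline and bucketing the result into a traffic-light rating, can be sketched in a few lines. This is an illustrative toy only: the tokenization, Jaccard similarity measure, and thresholds below are assumptions for the sketch, not the project's actual NLP approach.

```python
# Toy sketch of a traffic-light policy rating (illustrative assumptions,
# not the project's design): compare a site's privacy policy against a
# baseline policy and bucket the word overlap into green/yellow/red.
import re

def tokenize(text):
    """Lowercase word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def policy_rating(policy_text, baseline_text,
                  green_threshold=0.7, yellow_threshold=0.4):
    """Jaccard similarity between the policy and a baseline,
    mapped to a traffic-light rating (thresholds are arbitrary)."""
    a, b = tokenize(policy_text), tokenize(baseline_text)
    similarity = len(a & b) / len(a | b) if (a | b) else 0.0
    if similarity >= green_threshold:
        return "green", similarity
    if similarity >= yellow_threshold:
        return "yellow", similarity
    return "red", similarity

baseline = "We collect your email address and never share data with third parties."
policy = "We collect your email address and share data with advertising partners."
color, score = policy_rating(policy, baseline)  # partial overlap -> "yellow"
```

A real plugin would of course replace the bag-of-words comparison with the project's NLP models, but the display logic would be the same.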
Covert Embodied Choice: Using Physiology Tracking in VR to Explore the Limits of Privacy During Decision-making
John Chuang, Professor, UC Berkeley School of Information; Coye Cheshire, Professor, UC Berkeley School of Information; Jeremy Gordon, PhD Student, UC Berkeley School of Information; Max Curran, PhD Candidate, UC Berkeley School of Information
Interest in signals captured about individuals by an array of sensing devices will continue to grow as algorithms engineered to predict not only our attributes, but our future choices, become increasingly effective. A fundamental risk emerges from the fact that individuals still hold significant misperceptions about the sensitivity of the information that can be inferred from this data, and what agency they have in protecting their privacy when their fine-grained (and possibly involuntary) behavior is tracked. This team’s research will directly address this problem by examining how individuals obfuscate their intent in situations where the data-driven prediction of their choices may pose a threat to privacy and perhaps autonomy. They will leverage a variety of technology platforms to capture key motor and physiological signals. Analysis of these data streams will provide insight into people’s prior beliefs and strategic choices, as well as the sensitivity of and risks pertaining to physiology tracking in this setting.
The Cybersecurity of “Smart” Infrastructure Systems
Alison Post, Associate Professor, UC Berkeley Department of Political Science, Global Metropolitan Studies; Karen Frick, Associate Professor, City & Regional Planning, UC Berkeley; Marti Hearst, Professor, UC Berkeley School of Information and EECS; Kenichi Soga, Chancellor’s Professor, UC Berkeley Department of Civil and Environmental Engineering; Tim Marple, PhD Student, UC Berkeley Department of Political Science
Urban infrastructure systems such as water and sanitation networks, subways, power grids, and flood defenses are crucial for social and economic life, yet are vulnerable to natural hazards, such as earthquakes or floods, that could disrupt services. New sensor systems can potentially provide early warnings of problems, and thus help avert system failure or allow for evacuations before catastrophes. However, introducing such systems can increase the risk of cyberattack. This project will examine perceptions regarding the countervailing risks posed to infrastructure systems by natural hazards on the one hand, and cyberattacks following the introduction of new sensor systems on the other hand. The team will also design and evaluate the efficacy of new approaches to communicating these countervailing risks, drawing on recent advances in data visualization and political psychology.
Cybersecurity Toolkits for/of the Future: A Human-Centered Computing and Design Research Approach
James Pierce, Research Engineer, CITRIS, UC Berkeley, Assistant Professor, Design, California College of the Arts; Richmond Wong, PhD Candidate, UC Berkeley School of Information; Sarah Fox, Postdoctoral Researcher, Department of Communication, The Design Lab, UC San Diego; Nick Merrill, Postdoctoral Scholar, Center for Long-Term Cybersecurity, UC Berkeley
The cybersecurity toolkit—collections of digital tools, tutorials, tips, best practices, and other recommendations—has emerged as a popular approach for preventing and addressing cybersecurity threats and attacks. Often these toolkits are oriented toward vulnerable populations who have unique and pressing needs related to cybersecurity, but may not have access to the resources of large governments, corporations, or other organizations. Many such tools are designed specifically for journalists, researchers, political activists, and certain minority groups, such as refugees and LGBTQ youth. This project is concerned with studying this category of cybersecurity tools to inform the design and development of cybersecurity toolkits for both near-term and far-term futures. The project will adopt a mixed methodological approach rooted in human-computer interaction (HCI) and design.
Data Privacy: Foundations and Applications
Shafi Goldwasser, Director, Simons Institute for the Theory of Computing
Many organizations—including health care organizations, educational institutions, and government agencies—have an ongoing need to collect sensitive information about individuals, but then also to share the results of analyzing that data while respecting the individuals’ privacy. Statistical disclosure limitation is an old field, but the past two decades have seen numerous demonstrated failures of traditional statistical disclosure limitation paradigms, most notably “de-identification” and naive anonymization. A rigorous foundational approach to private data analysis has emerged in theoretical computer science in the last decade, with differential privacy and its close variants playing a central role. The resulting body of theoretical work draws on many scientific fields: statistics, machine learning, cryptography, algorithms, databases, information theory, economics, and game theory. This research project will advance core research on privacy and foster new collaborations between researchers who work on theoretical aspects of data privacy and those working in areas of potential applications.
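The rigorous approach mentioned above can be illustrated with differential privacy's simplest building block, the Laplace mechanism: noise scaled to sensitivity/epsilon is added to a query result before release, so the output reveals little about any one individual. The sketch below is a textbook illustration, not part of the project; the function names, dataset, and epsilon value are assumptions.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1: adding or removing one
    record changes the true count by at most 1, so the noise
    scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [34, 71, 52, 68, 45, 80, 29, 77]
# True count of people aged 65+ is 4; the release is noisy.
noisy = private_count(ages, lambda a: a >= 65, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy (more noise); the analyst trades accuracy for protection.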
Deep Fairness in Public Policy
Anil Aswani, Assistant Professor, Department of Industrial Engineering and Operations Research (IEOR), UC Berkeley; Mahbod Olfat, PhD Candidate, IEOR, UC Berkeley
The proliferation of automated decision-making systems has yielded much commercial success, but the potential of such systems to systematically generate biased decisions threatens to exacerbate the vulnerability of certain subgroups. Especially as the aim of machine learning algorithms shifts from making predictions for consumption by humans to making the very decisions themselves, it becomes critical to design algorithms that are robust to bias and to promote their adoption in relevant areas. This renewal project will focus on the development of a hierarchical framework for fair machine learning that extends to classification, unsupervised learning, and decision problems. By targeting the “score functions” that underlie many machine learning algorithms, this framework is able to obtain solutions that are more fair and more robust to noise in data. This renewal will study the problem of using the developed framework to ensure fairness in key social science and public policy domains, from the fair placement of heart defibrillators to the allocation of resources in law enforcement.
Design of Secure Future Mobility Solutions
Alice Agogino, Professor, Mechanical Engineering, UC Berkeley; Euiyoung Kim, Postdoctoral Design Fellow, Jacobs Institute for Design Innovation, Department of Mechanical Engineering, UC Berkeley
The goal of this research is to gain knowledge about cybersecurity vulnerabilities in emerging mobility technologies, such as autonomous vehicles, onboard sensors, monitoring systems, and customizable and shared car services. The research team will examine the vulnerability of these new technologies when combined with data breaches and intelligent data-mining malware. Working with an alliance of autonomous vehicle manufacturers, they will use mixed research methods: i) an experiment in a full-size vehicle to learn what kinds of sensitive user data can be carelessly collected by connected sensors and vehicles; ii) observations and interviews to capture users’ perceptions of the experience; and iii) expert interviews with automotive industry professionals to understand their awareness of cybersecurity in future mobility solutions.
Their work will help educate the next generation of cyber-talented designers through cybersecurity curricula and design guidelines, and the team will continue to work with transportation manufacturers to promote attention to cybersecurity in the concept development of new mobility solutions.
Developing Graduate Pedagogy for Tomorrow’s Engineers
Thomas Gilbert, PhD Candidate, Machine Ethics and Epistemology, Center for Human-Compatible AI, UC Berkeley; Sarah Dean, PhD Student, EECS, UC Berkeley; Roel Dobbe, Postdoctoral Researcher, AI Now Institute; Nitin Kohli, PhD Student, School of Information, UC Berkeley; McKane Andrus, Undergraduate Researcher, Center for Human-Compatible AI, UC Berkeley
This team will organize a convening at UC Berkeley to reorient cybersecurity research and pedagogy toward protecting civil institutions through robustness, fairness, and systems theory. UC Berkeley’s Graduates for Engaged and Extended Scholarship around computing and Engineering (GEESE) will lead a research cluster to produce concrete reforms and interventions in R1 graduate curricula. Their aim is to train the next generation of Ph.D. graduates to tackle the security problems posed by autonomous systems in distinct social spaces. This initiative seeks to empower Berkeley’s leading disciplinary voices to reimagine the cybersecurity landscape, revealing new opportunities for conceptual integration, specialization, and cross-fertilized growth. Through this seed grant, the researchers will develop a strategic plan and policy white-paper comprising 3-5 policy recommendations from the research cluster, to be shared with affiliates at other R1 institutions, the National Science Foundation, and think tanks focused on improving the cybersecurity talent pipeline.
Enabling Online Anonymity for Vulnerable Individuals and Organizations
Venkatachalam Anantharam, Professor, EECS, UC Berkeley
Anonymity is needed by vulnerable entities, particularly in closed societies where individuals and organizations are likely to face severe consequences for expressing independent opinions if their identities become known. On the Internet, the commonly deployed anonymity systems are based on the concept of Chaum Mixes, the most popular of these being Tor, which is used by about two million people every day. Mix nodes can be viewed as putting data packets into envelopes, while also delaying them by random amounts, to reduce the ability to associate specific incoming packets with specific outgoing packets. Recently a novel technology, called transactional mixing, has been proposed, based on peer-to-peer keys between the individual Mixes rather than public key cryptography. This project is aimed at developing this technology to improve the ability to provide anonymity based on Mixes.
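The batching-and-shuffling idea behind a Chaum mix can be sketched in a few lines. This is a conceptual toy, not the proposed transactional-mixing scheme: batching and shuffling stand in for the random per-packet delays, and real mixes would also decrypt a layer of encryption at each hop.

```python
# Toy Chaum-style mix node: collect a batch of messages, strip sender
# identities, and emit the batch in shuffled order, so an observer
# cannot link inputs to outputs by arrival order.
import random

class MixNode:
    def __init__(self, batch_size, rng):
        self.batch_size = batch_size
        self.rng = rng
        self.pool = []

    def accept(self, sender, ciphertext):
        """Queue an incoming 'envelope'; flush once the batch fills."""
        self.pool.append((sender, ciphertext))
        if len(self.pool) >= self.batch_size:
            return self.flush()
        return None

    def flush(self):
        """Strip sender identities and emit the batch in random order."""
        batch = self.pool
        self.pool = []
        self.rng.shuffle(batch)
        return [ciphertext for _sender, ciphertext in batch]

rng = random.Random(42)
node = MixNode(batch_size=3, rng=rng)
node.accept("alice", "msg-A")
node.accept("bob", "msg-B")
out = node.accept("carol", "msg-C")  # batch full: messages released shuffled
```

Chaining several such nodes means no single node can link a sender to a recipient, which is the property Tor builds on at scale.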
Evaluating Privacy and Security of Emerging Technologies for Older Adults
Alisa Frik, Postdoctoral fellow, EECS, UC Berkeley; Florian Schaub, Assistant Professor, The School of Information, University of Michigan; Primal Wijesekera, Postdoctoral fellow, EECS, UC Berkeley; Serge Egelman, Director of the Berkeley Laboratory for Usable and Experimental Security, International Computer Science Institute (ICSI)
In recent years, older adults have increasingly adopted emerging devices, especially in the healthcare domain, such as wearable and in-home health management, safety monitoring, fall detection, and emergency alert systems. These technologies help improve elderly people’s safety and health and support independent living. But they collect vast amounts of potentially sensitive information and are often connected to each other or to the Internet, and therefore pose serious privacy and security risks, to which older adults are particularly susceptible. This team will pursue a research agenda aimed at developing recommendations for designing effective systems that will empower informed decisions, allow for better control over personal data, and improve security for elderly users. The researchers will use structured interviews with developers of emerging healthcare technologies, usability experiments with older adults, and security threat assessment via network traffic analysis.
Hackers vs. Testers: Understanding Software Vulnerability Discovery Processes
Primal Wijesekera, EECS, UC Berkeley; Serge Egelman, ICSI, UC Berkeley; Noura Alomar, ICSI, UC Berkeley; Amit Elazari, UC Berkeley School of Law
White-hat hackers (bug hunters) play an important role in identifying security vulnerabilities through bug-bounty programs, yet they have not received the attention they deserve from the security research community. Bug hunting is still portrayed as an ad-hoc process, with little empirical evidence explaining why bug hunters are successful or how they differ from traditional software testers. The goal of this research team is to fill that knowledge gap and better understand white-hat hackers. They hypothesize that there are concrete differences between white-hat hackers and traditional software and penetration testers; white-hat hackers have proven to be a significant force for increasing the security of deployed systems by finding a variety of hidden bugs that might otherwise have been exploited by malicious actors. The researchers aim to uncover the reasons for that success scientifically. Understanding white-hat hackers’ approaches could help developers write more secure code, reducing the probability of introducing vulnerabilities and making software testing more efficient.
Industrialization and Economic Statecraft in the Data Age
Naazneen Barma, Associate Professor, Naval Postgraduate School, Visiting Scholar, UC Berkeley Center for Long-term Cybersecurity
The central goal of this research effort is to build the analytical framework and collect basic data for a new research agenda on industrialization and economic statecraft strategies in the context of the global data economy. There is active and rigorous political economy scholarship devoted to the question of ‘what can and should we do to data?’ in terms of regulating its collection and use. Much less systematic attention has been paid to the question of ‘what will data do to us?’ in terms of our global political and economic relationships. The latter question will be the focus of this research project.
Learning Photo Forensics
Andrew Owens, Postdoctoral Researcher, EECS, UC Berkeley
Advances in photo editing and manipulation tools have made it significantly easier to create fake imagery. Learning to detect such manipulations, however, remains a challenging problem due to the lack of sufficient amounts of manipulated training data. This research initiative will focus on developing new, sample-efficient learning methods that can learn to detect fake images with minimal labeled training data.
The Mice that Roar: Small States and the Pursuit of National Defense in Cyberspace
Melissa K. Griffith, PhD Candidate, Department of Political Science, UC Berkeley
As the medium of global conflict comes to encompass digital weapons alongside conventional ones, a surprising set of actors emerges as leaders in national defense. States like Estonia, Finland, Israel, and Singapore rank among the most secure and comprehensive in their capacity to provide national cyber defense to their populations. This project therefore examines (1) the components of cyber capability and cyber vulnerability driving national defense needs and, given those, (2) how these states, these ‘mice that roar,’ allocate resources in an effort to attain capabilities and address particular vulnerabilities. Ultimately, this research aims to illustrate that the resources states need to deploy in order to defend against an ongoing attack or recover from a previous attack are largely housed outside the military and even the government itself. As a result, national cyber defense requires a societal defense approach: states must simultaneously structure national defense in a manner that integrates both public and private actors and does not rely on the military or intelligence agencies as the sole or even primary actors.
Mobile App Privacy Analysis with AppCensus
Serge Egelman, Research Director, Usable Security and Privacy Group, ICSI, EECS, UC Berkeley; Kenneth Bamberger, Professor, UC Berkeley School of Law; Narseo Vallina-Rodriguez, Assistant Research Professor, Internet Analytics Group, IMDEA Networks; Irwin Reyes, Researcher, Usable Security and Privacy Group, ICSI; Primal Wijesekera, Postdoctoral Researcher, Usable Security and Privacy Group, ICSI; Amit Elazari, Lecturer, UC Berkeley School of Information
Over the past several years, this research team has developed infrastructure that provides an unprecedented view into the privacy behaviors of Android apps. AppCensus is a dynamic analysis testbed that combines bespoke instrumentation within the operating system itself with sophisticated network analysis tools, which allows the researchers to detect exactly when applications attempt to access sensitive user data and then monitor with whom it is shared. As a case study last year, the team used this infrastructure to examine children’s apps’ compliance with the Children’s Online Privacy Protection Act (COPPA) and found that a majority of applications in the Google Play Store appear to be violating this federal law. For 2019, this team will perform additional research using this existing infrastructure.
New Frontiers in Encryption Technologies: Removing Central Authorities from Advanced Encryption Systems
Mohammad Hajiabadi, Postdoctoral Researcher, EECS, UC Berkeley
Public-key encryption, a basic tool in cryptography, has been used for decades to provide security for encrypted communications. In order to encrypt to a user, one first needs to obtain the user’s public key. In today’s world, with the size of organizations growing, the use of mere public-key encryption techniques, which requires knowledge of public keys of individual users, becomes increasingly prohibitive and calls for expensive key-management infrastructure. One solution is Identity-Based Encryption (IBE), an encryption system that allows one to encrypt to a user just by knowing the user’s identity, as opposed to the user’s public key. But while IBE systems simplify the task of key management, they come with one major issue, referred to as the key-escrow problem: A central authority in the system is now in possession of all secret keys of the users, and may read their messages at will. This research initiative will introduce the concept of registration-based cryptography, which aims to remove the key-escrow problems from IBE and related technologies. The researchers will tackle problems related to designing robust and efficient registration-based encryption systems, which have applications in situations where controlled decryption access to encrypted information is required (e.g., government tax center systems).
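The key-escrow problem described above can be made concrete with a toy: because the authority derives every user's secret key from one master secret, it can reconstruct any key and read any message. The HMAC-based key derivation and XOR "cipher" below are deliberately simplistic stand-ins for illustration, not a real IBE construction, and all names are assumptions.

```python
# Toy illustration of key escrow in identity-based encryption:
# the central authority's master secret determines every user's key.
import hashlib
import hmac

MASTER_SECRET = b"authority-master-secret"  # held only by the authority

def derive_user_key(identity):
    """Authority derives a per-identity key from the master secret."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

def xor_cipher(key, data):
    """Toy symmetric 'encryption': XOR with a repeating key.
    (Real IBE uses pairing-based public-key operations.)"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A sender encrypts to "alice" knowing only her identity string.
alice_key = derive_user_key("alice")
ciphertext = xor_cipher(alice_key, b"meet at noon")

# Alice decrypts with the key the authority issued her...
plaintext = xor_cipher(alice_key, ciphertext)

# ...but the authority can re-derive her key and read the message too.
escrowed = xor_cipher(derive_user_key("alice"), ciphertext)
```

Registration-based encryption aims to keep the identity-based convenience for senders while eliminating the master secret that makes this total decryption possible.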
Privacy for Always-Listening Devices
David Wagner, Professor, EECS, UC Berkeley; Serge Egelman, Research Director, ICSI
Microphone-equipped Internet of Things devices, and smart voice assistants specifically, offer the promise of great convenience, yet pose grave privacy challenges. The aim of this research is to understand the privacy implications of voice as a sensitive data source, and to develop techniques to help users protect their privacy from these always-listening devices. More specifically, the goal is for users to be able to specify restrictions on these devices—what should they be able to hear and what is off-limits—and to develop a system capable of enforcing those preferences. The researchers will study people’s expectations for these devices, design natural interfaces for specifying these constraints, investigate techniques for enforcing these preferences, and evaluate the effectiveness and usability of the proposed approaches.
Privacy Engineering: Education and Training
Daniel Aranki, Postdoctoral Researcher, EECS, Lecturer, School of Information, UC Berkeley
Preparing cybersecurity professionals for the workforce requires providing education and training in both security and privacy. Cybersecurity curricula in the U.S. are lacking a technical privacy syllabus, often dubbed Privacy Engineering. To bridge this gap, these researchers have been working for the last few years to build a foundational course in privacy engineering, at a graduate level. The course—Introduction to Privacy Engineering—was filmed and is ready to be piloted as an advanced course in the Master of Information and Cybersecurity (MICS) program in the School of Information at UC Berkeley. The work, however, is just getting started. CLTC’s funding will enable the team to further develop the course toward its final version, based on the experience from the first pilot offering, and will enable them to offer the same course in the School of Information and College of Engineering as an offline graduate-level course.
Privacy-preserving Federated Learning on Blockchain and its Application on System Anomaly Detection
Dawn Song, Professor, EECS, UC Berkeley; Min Du, Postdoctoral Fellow, EECS, UC Berkeley; Jian Liu, Postdoctoral Fellow, EECS, UC Berkeley
Machine learning technology is developing rapidly and continuously changing our daily lives. However, a major limiting factor for many machine learning tasks is the need for large and diverse training data. Crowdsourcing has been shown to be effective for collecting labeled data through a centralized server. The emergence of blockchain technology makes a decentralized platform possible, which provides better reliability and discoverability. But while blockchain provides an ideal platform for crowdsourcing, all data become publicly available once they are put onto today’s blockchain platforms, such as Ethereum. This could discourage users from contributing data that may contain highly sensitive information, e.g., medical records. This research project will focus on designing a blockchain-based data-sharing and training platform that allows participants to contribute data and train models in a fully decentralized and privacy-preserving way. The researchers will use system log anomaly detection to demonstrate the wide applicability of the proposed platform.
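The privacy-preserving training step at the heart of such a platform is federated learning: clients train locally on private data and share only model parameters, which a coordinator averages. The sketch below is a minimal illustration under stated assumptions; the plain function standing in for a blockchain smart contract, the 1-D model, and the datasets are all hypothetical.

```python
# Minimal federated-averaging sketch: clients share weights, not data.

def local_update(weights, data, lr=0.1):
    """One gradient step of 1-D linear regression (y = w*x) on a
    client's private data; the data itself never leaves the client."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    """Coordinator (in the project, a decentralized blockchain platform;
    here a plain function) averages the locally trained weights."""
    return sum(client_weights) / len(client_weights)

# Two clients with private datasets drawn from roughly y = 2x.
client_a = [(1.0, 2.1), (2.0, 3.9)]
client_b = [(1.0, 1.9), (3.0, 6.2)]

global_w = 0.0
for _ in range(50):  # a few synchronous training rounds
    updates = [local_update(global_w, client_a),
               local_update(global_w, client_b)]
    global_w = federated_average(updates)
# global_w converges near the true slope of 2 without pooling raw data.
```

On a public chain the exchanged weights themselves would still be visible, which is why the project pairs this scheme with additional privacy-preserving machinery.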
Protecting Consumers Against Phone Phishing
Michelle Chen, Student, Master of Information Management and Systems (MIMS) program, School of Information, UC Berkeley
As more information about people becomes available online, and as data breaches grow more frequent, it is simple for scammers to use credible information to build trust and initiate spear-phishing attacks, leaving victims with compromised personal identity information and/or financial distress. Social engineering attacks are difficult to combat, and especially difficult to recover from, both financially and psychologically. How might we help people protect themselves from these phishing scams, particularly when they use social engineering tactics? This project will explore potential solutions to better protect consumers based on data collected on “vishing” (voice phishing) attacks. The first phase will entail information gathering through in-depth interviews and scraping of online sources; the second phase will include designing a prototype tool for consumers to protect themselves against social engineering tactics and testing its plausibility with users.
Public-Private Data Relationships: Understanding the Everyday Processes
Yan Fang, PhD Student, Jurisprudence and Social Policy, UC Berkeley
Over the past two decades, Internet technology companies have developed products and services that collect large amounts of information about people. Government agencies at the federal, state, and local levels often seek these user data for law enforcement purposes, yet such data are increasingly held by commercial firms. How does firms’ collection of user data affect law enforcement? This project explores this question through interviews with staff at law enforcement agencies and at Internet technology companies.
Secure Machine Learning
David Wagner, Professor, EECS, UC Berkeley
This research initiative is focused on studying the security of deep learning and how to harden machine learning against attacks. The goal is to provide a more robust foundation for applications that use machine learning in settings where security is necessary. The use of machine learning to support automated decision-making is on the rise; but this introduces the risk that attackers will learn how to manipulate those decisions by exploiting specific weaknesses of machine learning. Research over the past few years has demonstrated the susceptibility of modern machine learning techniques to such attacks; this project will focus on studying defenses against these attacks.
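A small example shows the kind of manipulation at issue: an adversarial perturbation against a linear classifier. Nudging each feature slightly in the direction of the corresponding weight's sign (the intuition behind the fast gradient sign method) can flip the model's decision while barely changing the input. The classifier and numbers below are illustrative assumptions, not any system studied by the project.

```python
# Toy adversarial example against a linear classifier.

def predict(weights, bias, x):
    """Linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_perturb(weights, x, epsilon):
    """Shift each feature by at most epsilon in the direction that
    increases the classifier's score (sign of each weight)."""
    sign = lambda w: 1.0 if w > 0 else (-1.0 if w < 0 else 0.0)
    return [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

weights, bias = [1.0, -2.0], 0.0
x = [0.4, 0.5]                 # score = 0.4 - 1.0 = -0.6 -> class 0
x_adv = adversarial_perturb(weights, x, epsilon=0.4)
# x_adv = [0.8, 0.1]: score = 0.8 - 0.2 = 0.6 -> class 1, decision flipped
```

Deep networks are locally close to linear in this respect, which is why small, nearly imperceptible perturbations can fool them, and why principled defenses are an open research problem.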
Smart Home Surveillance of Domestic Employees
Maritza Johnson, Senior Researcher, International Computer Science Institute (ICSI); Julia Bernd, Researcher, ICSI; Alisa Frik, Postdoctoral Researcher, EECS, International Computer Science Institute
This research aims at developing a more comprehensive understanding of how the expanding use of smart home devices affects the privacy of individuals who did not choose to deploy them—and may not even be aware of them—with a focus on the privacy of in-home employees, particularly nannies. The researchers will conduct studies with both nannies themselves and parents who employ nannies, to identify within each group common experiences, expectations, and attitudes about the privacy ramifications of domestic surveillance, and potential points of intervention. Findings from the studies will support guidelines and recommendations for developers of smart-home devices and for policymakers, as well as public-education materials for domestic workers themselves and those who employ them.
Speculating “Smart City” Cybersecurity with the Heart Sounds Bench: Détourning Data and Surveillance in Public Space
Noura Howell, PhD Candidate, UC Berkeley School of Information
Visions of the “Smart City” promise safety and efficiency using sensors, data, and technology. While data-driven approaches often claim to escape social prejudice with objective insights, they can instead bolster inequity and create cybersecurity threats for vulnerable populations. This initiative will focus on creating artistic yet fully functional technologies, with a goal to foster critical discussion and radical re-imagining of the role of sensing and data in smart city visions. Among the planned projects is a “Heart Sounds Bench” that amplifies the heart sounds of those sitting on it. In contrast to other technologies using heartbeat data to categorize emotions or suggest crime or health risks, the bench simply invites rest, listening, and sharing space with others.
Towards Efficient Data Economics: Decentralized Data Marketplace and Smart Pricing Models
Dawn Song, Professor, EECS, UC Berkeley; Ruoxi Jia, Postdoctoral Researcher, EECS, UC Berkeley
Advances in machine learning and artificial intelligence have demonstrated enormous potential for building intelligent systems and growing knowledge bases. However, the current data marketplaces are not efficient enough to facilitate long-term technological and economic advancements. Big companies analyze user data to improve product design, customer retention, and initiatives that help them earn revenue; however, the users who contribute data go unrecognized and uncompensated. The inefficiency of the current data market is due in part to the centralized data curation model; more importantly, there is little consensus on how to determine the value of data, consensus that would otherwise empower legislators to regulate the data market. In this project, the researchers plan to investigate the theoretical and algorithmic foundations of data valuation and implement the results from the theoretical studies in a blockchain-based decentralized data marketplace to help manage transactions of patients’ data in a clinical study.
Using Multidisciplinary Design to Improve AI/ML Cybersecurity Scenarios
James Pierce, Research Engineer, CITRIS, UC Berkeley, Assistant Professor, Design, California College of the Arts; Richmond Wong, PhD Candidate, School of Information, UC Berkeley; Tara Shi, Student, Master of Architecture, College for Environmental Design, UC Berkeley
The overarching research question guiding this project is: How can multidisciplinary design methods, perspectives, and forms be applied to improve existing artificial intelligence (AI) cybersecurity scenarios, predictions, and extrapolations produced by researchers, market analysts, government organizations, and industry experts? The research will begin by collecting, reviewing, and organizing existing academic, industry, and government reports, articles, and publications that present scenarios, proposals, and predictions for AI cybersecurity. The researchers will then use design to expand and refine these scenarios, producing a set of scenarios in print and digital formats, including images, text, animations, videos, and multimedia. These scenarios will then be used to spark further conversations with experts and non-experts in AI and cybersecurity, law, policy, political science, technology, design, and ethics. Finally, the scenarios will be collaboratively refined and documented in a publicly available online repository, as well as in print publications.
Projects Jointly Funded with the Center for Technology, Society & Policy
Coordinated Entry System Research and Development for Alameda County’s Continuum of Care
Zoe Kahn, PhD student, UC Berkeley School of Information; Mahmoud Hamsho, Amy Turner, Yuval Barash, and Michelle Chen, MIMS students, UC Berkeley School of Information
Governments are increasingly using technology to allocate scarce social service resources, such as housing services. In collaboration with Alameda County’s Continuum of Care, this project will use qualitative research methods (interviews, participatory design, and usability testing) to conduct a needs assessment and develop system recommendations for “matching” unhoused people to appropriate services. The goal is to identify matching systems that suit the needs of diverse housing service providers across the county without compromising the needs and personal information of vulnerable populations. Beyond efficiency, the researchers will consider how such systems handle values like privacy, security, autonomy, and resiliency.
Engaging Expert Stakeholders about the Future of Menstrual Biosensing Technology
Noura Howell and Richmond Wong, PhD Candidates, UC Berkeley School of Information; Sarah Fox, Postdoctoral Researcher, UC San Diego Department of Communication and The Design Lab; and Franchesca Spektor, Undergraduate, UC Berkeley
Networked sensor technologies are increasingly present in daily life. While promising improved health and efficiency, they also introduce far-reaching issues around cybersecurity, privacy, autonomy, and consent that can be difficult to predict or resist. This project will examine menstrual tracking technologies as a case for understanding the current and near-future implications of increasingly pervasive techniques of intimate data collection. These technologies collect sensitive data (e.g., menstrual flow quality, medicine use, sexual activity) and predict period dates and fertility. Last year, the researchers reviewed the privacy policies of current menstrual tracking applications, which informed the design of speculative near-future technologies exploring surveillance concerns. This year, they will engage expert stakeholders of menstrual tracking around these speculative designs to broaden the discussion of cybersecurity, privacy, and fairness concerns. They will share their research findings with a broad audience to help scaffold the collective reimagination and reconfiguration of intimate biosensing.
Factors Affecting Trust Among Vulnerable Populations
Rajasi Desai and Varshine Chandrakanthan, MIMS students, UC Berkeley School of Information
This project aims to understand the trust dynamics and the factors affecting trust for vulnerable populations, such as the human rights defenders, activists, and journalists who document and upload sensitive media, as well as the people who receive this media in order to use it as evidence. The researchers will work to understand the ecosystem in which at-risk populations operate and identify areas where trust plays a pivotal role. Finally, they will propose factors that shape trust in applications designed for at-risk populations.
Re-imagining Password Management for Low-Technology Proficiency Users
Ching-Yi Lin, Ayo Animashaun, Jing Wu, and Amy Huang, MIMS students, UC Berkeley School of Information
Passwords and login information control access to some of the most important aspects of life, such as banking and finances, medical services, and other sensitive personal information. According to Pew Research, 44% of online adults ages 30 to 64 say they have a hard time keeping track of their passwords. These “password-challenged” internet users are more likely to keep track of their passwords by writing them down on paper, saving them in a digital note, or saving them in their web browser, all practices that cybersecurity experts consider less secure. This research project aims to design a solution that constructively engages the competing values of security and ergonomics in the development of password management systems. The researchers will integrate concepts from areas such as user experience design, privacy and security, and behavioral economics to develop a tool that balances these competing values. Their objective is to improve password generation habits with a tool that strengthens digital security and reduces the potential for breaches and privacy harms.