On October 7, 2021, the Center for Long-Term Cybersecurity (CLTC) hosted our fifth annual CLTC Research Exchange, a showcase of recent work from our community of affiliated researchers.
The theme of this year’s event, “Fostering Foresight,” underscored CLTC’s mission: helping individuals, organizations, and governments act on foresight, and expanding who gets to participate in, and has access to, cybersecurity. In addition to the research talks, the event included two panel discussions aimed at promoting dialogue with industry practitioners and members of the public-interest cybersecurity community.
“This is always such a special event because it’s an opportunity to get a sneak preview of works in progress,” said Ann Cleaveland, Executive Director of CLTC, in her introductory remarks. “We’ll count ourselves successful if we’ve strengthened the bridge between cutting-edge academic research and cybersecurity challenges that practitioners who are joining us today from industry, government, and civil society are grappling with on the ground.”
This year’s event drew the largest audience ever to a CLTC Research Exchange, as nearly 200 audience members tuned in to the online event from around the world. The presentations spanned a diverse array of topics, ranging from how online games can teach children to develop stronger passwords, to how voters can best be convinced to support strong cybersecurity in urban infrastructure, to how children and other non-primary users of in-home digital devices may be vulnerable to privacy harms.
“The work of CLTC grantees and researchers is more important than ever,” Cleaveland said. “Since we founded CLTC, the threat landscape is evolving faster. Adversaries continue to have limitless creativity, and digital technology has literally reached into every aspect of human life.”
Gamification of Cybersecurity Education (Video)
The first research talk focused on a project by two alumni of the UC Berkeley School of Information’s Master of Information and Cybersecurity (MICS) degree program: David Ng, Privacy Program Manager at Facebook, and Jacky Ho, Director of Security IT at UBS.
The researchers discussed how they used CLTC funding to launch a “Cybersecurity Game Jam,” an online contest that challenged developers to create games that teach basic cybersecurity concepts to children. The contest received 43 entries in two weeks. “They are all online, they’re all free, you can go at any time,” said Ho.
The winning game, Cryptic Clash, encourages players to create passwords that are stronger than their opponents’, and the passwords themselves square off in a head-to-head battle. In the next phase of the study, Ho and Ng will conduct an experiment to determine whether grade-school students who play Cryptic Clash develop better password habits than students in a control group.
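The recap doesn’t detail how Cryptic Clash actually scores a match, but a toy sketch can illustrate the head-to-head idea. The Python snippet below pits two passwords against each other using a naive charset-size-times-length entropy heuristic; the heuristic and all names in it are our own illustrative stand-ins, not the game’s real rules.

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Toy strength estimate: length * log2(charset size).

    Illustration only; real strength meters (e.g., zxcvbn) also
    penalize dictionary words and common patterns.
    """
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)
    return len(password) * math.log2(charset) if charset else 0.0

def battle(p1: str, p2: str) -> str:
    """One head-to-head round: the higher-entropy password wins."""
    e1, e2 = estimate_entropy_bits(p1), estimate_entropy_bits(p2)
    if e1 == e2:
        return "draw"
    return "player 1" if e1 > e2 else "player 2"

print(battle("password123", "c0rrect-H0rse!"))  # prints: player 2
```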
The Cybersecurity of “Smart” Infrastructure (Video)
The second talk was presented by Alison Post, whose CLTC-funded research focused on “smart city technologies,” through which local governments collect real-time data from interconnected devices and sensors to improve their operations. Post and her fellow researchers asked cybersecurity experts to rate different smart city technologies on their underlying technical vulnerabilities, their attractiveness to potential attackers, and the potential impact of a successful attack.
Post explained that the experts consistently flagged the same technologies, including emergency and security alert systems, street video surveillance systems, and smart traffic lights and signals. “These are the technologies that local governments should be more concerned about, and think harder about cyber protections, if they do choose to move forward with adopting them,” Post said. (Findings from the survey were detailed in a 2021 CLTC white paper, The Cybersecurity Risks of Smart City Technologies: What Do the Experts Think?)
For the second part of the study, the researchers surveyed 7,000 California voters about whether they would be willing to pay a surcharge on their water bill to finance cybersecurity upgrades; they found that voters are more willing to pay to counter threats that could endure for many years. “This is potentially a useful way to approach voters who are not particularly informed about cybersecurity issues, but it’s not going to work with the full electorate,” Post said.
Considering Privacy and Security of Non-Primary Users in the Smart Home (Video)
The third talk, by Richmond Wong, a Postdoctoral Fellow at CLTC, described research on how “non-primary users” of smart-home devices, such as Alexa, Google Home, or Ring security systems, may be exposed to these products’ potential privacy harms.
“Our work highlights three potential social relationships among primary and non-primary users: parents and children, landlords and tenants, and residents and domestic workers,” Wong said. “The privacy of non-primary users is strongly affected by the actions of primary users and what their social relationship is. So if you’re a non-primary user, you likely don’t have access to the actual device, and you may not have the social power to push back against the primary users’ practices.”
Wong and his fellow researchers worked with students at the University of Washington to create a set of scenarios imagining future smart home use cases. For example, one scenario describes how a smart home security camera may double as a curfew detector, notifying parents when kids are sneaking out at night. In a landlord-tenant scenario, smart cameras and microphones could be used for extreme monitoring of potential lease violations.
“This work helps us learn about design scenarios as a foresight tool, and helps us understand how users and everyday people might make sense of these potential future social and technical developments,” Wong explained. “It helps us think in more complex ways about who has access to privacy, from a design perspective, as well as from a law and policy perspective.”
Panel: The Future of Public Interest Cybersecurity Clinics (Video)
The first panel discussion at this year’s Research Exchange focused on public interest cybersecurity clinics, which are emerging as a vital resource in providing digital security services to non-profits, journalists, activists, and others who face online threats but have limited resources to defend themselves.
The panel was hosted by Andreen Soley, Director of New America’s Public Interest Technology University Network (PIT-UN), which comprises 43 universities and colleges. “PIT-UN believes strongly in the need to connect what students are learning in the classroom with the real-world applications of their skills and knowledge out in the world,” Soley said. “We’re particularly keen to support and amplify projects that protect the most vulnerable among us, online and offline.”
Larry Susskind, Ford Professor of Urban and Environmental Planning at MIT, oversees the MIT Cybersecurity Clinic, through which students primarily work with local governments, hospitals, and other agencies that require digital security assistance. “On a confidential basis, we do a vulnerability assessment for each of these communities and suggest the kinds of things they might want to work on,” Susskind explained. “It’s very exciting now, because there’s a coalition of cybersecurity clinics at other universities, so we can learn from each other and share information.”
Tiffany Rad, CEO and co-founder of Anatrope, Inc., teaches the UC Berkeley School of Information’s Citizen Clinic course, which gives students real-world experience helping politically vulnerable organizations and individuals around the world develop and implement sound cybersecurity practices.
Rad explained that the Citizen Clinic’s clients face challenges not only in securing their networks, but also in staving off threats on social media, as well as protecting their physical safety. “When a nonprofit group is looking at how do we do cybersecurity, not only is it a daunting task, because there are so many threats out there, but the question is, where to start?” Rad explained. “Some of the products that are sold are beyond their budgets. We do a security assessment for clients and make recommendations that are practical and economical for them.”
Cybersecurity clinics are particularly effective when they include students from diverse backgrounds, Rad said. “We’re branching out and bringing in more and more students from different programs at Berkeley,” she said. “It’s not just all technical. You’ve got to look at the human elements of how people are using this cybersecurity. That’s why we’re expanding the program.”
Lily Lin, a project manager at Microsoft, spoke about the value for students of participating in a public interest cybersecurity clinic. Lin is a graduate of the School of Information’s Master of Information Management and Systems (MIMS) program and participated in the Citizen Clinic course. “I wouldn’t still be at Microsoft if it wasn’t for the clinic,” Lin said. “The thing that we keep coming back to is that technology needs to be easy for users.”
Assessing and Developing Online Election Information Infrastructure (Video)
In the next research talk, Emma Lurie, a PhD student in the UC Berkeley School of Information, discussed research focused on the increasingly important role that platforms like Google and Facebook play in providing infrastructure for elections, particularly in how they make information available to voters. “From directions to polling places to information about candidates’ positions, the public relies on technology platforms like Google and Facebook to help navigate political participation,” Lurie explained. “As a result, technology platforms are increasingly entangling themselves with existing online election information infrastructure.”
Lurie developed case studies that highlight flaws in how these platforms provide information to voters. For example, she found that 10% of Google searches that seek to identify a California local representative are likely to mislead information-seekers about who their representative is. She also offered recommendations for steps Google can take to meet its stated commitment to providing high-quality civic information.
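The recap doesn’t spell out Lurie’s measurement method, but the arithmetic behind a “mislead rate” is easy to illustrate. The hypothetical sketch below compares the representative a search surfaced against an official ground-truth roster; every query, name, and result in it is invented for illustration.

```python
# Hypothetical sketch: compare the representative a search surfaced
# against an official ground-truth roster. All data below is invented.
ground_truth = {
    "who is my city council member 94704": "Jane Doe",
    "my state assembly member berkeley": "John Roe",
    "san jose district 3 council member": "Ana Poe",
}

# Top answer a search engine hypothetically returned for each query.
observed = {
    "who is my city council member 94704": "Jane Doe",
    "my state assembly member berkeley": "Richard Moe",  # wrong person
    "san jose district 3 council member": "Ana Poe",
}

misleading = sum(
    1 for query, correct in ground_truth.items()
    if observed.get(query) != correct
)
print(f"Mislead rate: {misleading / len(ground_truth):.0%}")  # 33%
```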
“My work helps shift our thinking to consider the interaction effects of technology platforms and existing online election information,” Lurie said. “This shift helps us better understand the online election information ecosystem, develop voter-centered measurement techniques, and consider the ways that we can be proactive about developing a robust online information infrastructure.”
Centering the Data Privacy Perspectives of Safety-Net Patients (Video)
In her research talk, Laura Elizabeth Pathak, a PhD student in Social Welfare at UC Berkeley, presented research on the data privacy and security concerns that “safety-net” patients have about the health data collected about them. The research was based on a text-messaging app that encourages physical activity among low-income minority patients.
“We use the term ‘safety-net patients’ to encompass a group of patients who are low-income, tend to be underinsured or uninsured and are usually Medicaid or Medicare recipients, and are disproportionately racial and ethnic minorities,” Pathak explained.
“Some of us assume that patients do not care about the privacy implications of collecting their health and location data, and underlying this assumption is another assumption: that having limited technological literacy hinders their understanding of the risks and benefits of mobile technology. How can we center their perspectives on these topics?”
Pathak’s study revealed that safety-net patients in fact have strong opinions about privacy that are shaped in part by their prior experience with security breaches, as well as their trust in the medical establishment. “It’s not just about technical knowledge, but the study shows there’s a huge role that cultural and historical context plays in the digital literacy of this patient population,” Pathak said.
The findings suggest that marginalized communities should play a direct role in the design of technology solutions. “It’s important to think about engaging with these populations — not designing around them, but with them,” Pathak said. “In doing that, we hope to center and respond to their opinions and improve cybersecurity practices and policies, which can help contribute to the integration of technology into clinical settings and ideally, help ameliorate some of the health disparities that we see.”
Explainability Won’t Save AI: Power Asymmetry in the Implementation of AI Principles (Video)
In her research talk, Jessica Newman, Research Fellow at CLTC and Program Director for the Artificial Intelligence Security Initiative (AISI), discussed her research on “explainable artificial intelligence” (XAI), by which the calculations or conclusions made by a machine-learning system can be readily understood or interpreted by humans.
“A lot of AI technologies are considered to be black boxes, meaning that there is not sufficient understanding of how and why a model provides the outputs it does,” Newman said. “Explainability has become a core principle of responsible AI development that is referenced in hundreds of AI strategies. It also plays an important role in the realization of other core principles, such as fairness and safety. Because if you don’t know how a model works, it is extremely difficult to mitigate bias or harmful outcomes. So we are pinning a lot of hopes on the idea of explainability.”
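Newman’s talk treats explainability as a governance principle rather than any single technique, but one concrete example helps ground the term. The sketch below implements permutation feature importance, a widely used post-hoc explanation method, on toy data; it is our illustrative choice, not a method drawn from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a least-squares linear model as a stand-in for any predictor.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ coef

def mse(pred, truth):
    return float(np.mean((pred - truth) ** 2))

baseline = mse(predict(X), y)

# Permutation importance: shuffle one feature at a time and measure
# how much the error grows. A bigger jump means the model leans on
# that feature more, which offers a rough "explanation" of its behavior.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"feature {j}: importance = {mse(predict(X_perm), y) - baseline:.3f}")
```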
Newman reviewed explainability research and publications from a broad range of stakeholders and found “significant variation,” but she also identified three domains with greater internal consistency: engineering, deployment, and governance.
“The commonality across the domains was that explainability should provide assurance about the effectiveness and appropriateness of a system at achieving its intended task,” Newman said. “Notable differences included that only the engineering domain treated AI systems as constantly in flux and capable of regular improvement, while the other domains expected greater consistency to inform expectations and enable adherence to policies. Only the governance domain stressed the values of human agency and safeguarding against algorithmic bias.”
Newman pointed out that there is a “significant discrepancy” between the concept of explainability in principle and how it plays out in practice, which risks tilting the balance of who benefits from AI systems. “Without clear articulation of the objectives of explainability from different communities, AI is more likely to serve the interests of those already in power,” she said. “By improving clarity about the diversity of explainability objectives, AI organizations and standards bodies can make explicit choices about what they are optimizing and why. AI developers can be held accountable for providing meaningful explanations and mitigating risks to the organization, to users, and to society at large. This is hugely important today, as women and people of color are disproportionately bearing the burdens of inaccurate AI systems. And it will only be more important in the future, as AI systems are further integrated into critical infrastructure and society at large.”
Panel: How Leaders Generate and Use Cybersecurity Foresight (Video)
The Research Exchange concluded with a panel in which industry practitioners discussed the importance of foresight in defending against adversaries, a concept that has been central to the Center for Long-Term Cybersecurity’s work since its inception.
“When we started CLTC, there was this feeling that [cybersecurity professionals] were operating like emergency department physicians, just trying to deal with whatever comes in the door that day, trying to patch it up and get it working again,” explained Steve Weber, Faculty Director for CLTC, in introducing the panel. “But there often wasn’t anyone else out there to take care of the long-term problem. I wanted to develop a foresight perspective so that we could get out ahead of that emergency mindset.”
The purpose of the panel, Weber explained, was to “bring together a couple of folks who are sitting at the nexus of that challenge in different parts of the industry and learn from them.”
Dr. Alissa Jay Abdullah, Senior VP of Emerging Corporate Security Solutions at Mastercard, described how she came to the cybersecurity field via a career path that included wanting to be a radio disc jockey, an English major, and a math major. “We grow up thinking in boxes, that things have to fit a certain way, or that you have to go through a certain path to get somewhere,” she said. “And I will tell you, the adversary doesn’t have 25 years of cybersecurity experience with degrees and all those other things added on to it. We have to think differently about how we want to attack the adversary or defend our own networks and infrastructures if we want to win in the cyber war.”
Derek Manky, Chief of Security Insights & Global Threat Alliances for Fortinet, noted that the digital security challenge has evolved rapidly in recent years, and that approaches to security that used to be effective no longer apply. “What we didn’t really anticipate was how quickly this has evolved into a cybercriminal ecosystem,” Manky said. “It forced us to rethink threat intelligence. Instead of just looking at the weapons and the indicators, we need to back up with foresight and really think like a cyber criminal. We see that these weapons are coming up. What do the factories look like? How are these people connected? And what are their strategies and tactics?”
Claire Vishik, Fellow and GMT CTO at the Intel Corporation, spoke about how cybersecurity has become connected to so many other domains, including policymaking. “When we are talking about cybersecurity in the new environment, where we have anything from edge clouds to artificial intelligence to micro-devices and even systems of systems, like smart cities, it’s impossible to talk about cybersecurity without understanding how the systems work, and what effect they have on society. So working at the intersection of these areas is very important.”
The panelists also discussed the important role that diversity plays in achieving success in cybersecurity, as it can improve foresight and help defenders think creatively about how an attacker might operate. “We, who are on the right side of cybersecurity, are used to processes, procedures, plans, and things fitting the way they’re supposed to fit,” Abdullah said. “That is how we continue to miss the adversary hitting us from all of these different directions.”
Including diverse voices in the development of new technologies is essential to success, Abdullah said. “The adversary is attacking everyone, so we need to hear from a lot of different perspectives,” she said. “And if not, then the solutions that we build are going to be difficult for everyone to accept and learn how to use. When you don’t consider neurodiversity and you don’t consider equity, then you really have a one-sided solution.”
CLTC would like to thank all the panelists and researchers who participated in this year’s Research Exchange! And many thanks to our supporters.