The need for long-term strategic foresight in cybersecurity has never been greater. Those who pursue it will have the advantage.
On September 27, the Center for Long-Term Cybersecurity (CLTC) hosted a special event to kick off Cybersecurity Futures 2030, a multi-phase scenario planning project aimed at exploring challenges and opportunities that could emerge from the continued evolution of digital technology. Held at Swissnex, on San Francisco’s Pier 17, the event featured a panel of leading experts from diverse disciplines discussing how we can begin to address a range of threats that could arise over the next 5-7 years.
“The Center for Long-Term Cybersecurity is UC Berkeley’s home for research and teaching on the frontier of digital security, and we hope to do that for Silicon Valley more broadly,” explained Andrew Reddie, Assistant Professor of Practice at the UC Berkeley School of Information and Co-faculty Director of CLTC, who moderated the panel. “This event today is a kickoff effort to get at some of what we think are the fundamental cybersecurity questions that we should be worried about come 2030.”
Cybersecurity Futures 2030 represents the third major scenarios-based project led by CLTC since its founding in 2015; the first two sets of scenarios focused on 2020 and 2025, respectively. Cybersecurity Futures 2025 reached thousands of decision-makers in a dozen countries, and was translated into Chinese by the Taiwan Information Security Center. CLTC anticipates a similarly ambitious roster of Cybersecurity Futures 2030 collaborators and global presentations. (CLTC is currently seeking partners and supporters for the Cybersecurity Futures 2030 project; visit this page for more information.)
Deepfakes and the Future of Disinformation
Hany Farid, Professor in the UC Berkeley Department of Electrical Engineering & Computer Science and the School of Information, began the evening’s presentation with an eye-opening keynote entitled “Deepfakes and Disinformation Circa 2030.” Farid’s research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception; he is one of the world’s foremost experts on “deepfakes”: videos, images, and other media created with artificial intelligence-based technologies.
Farid explained how his laboratory at UC Berkeley uses computer models to analyze videos and determine whether they are authentic. He pointed to examples like the website thispersondoesnotexist.com, which displays human faces created by artificial intelligence, and deepfake videos impersonating Tom Cruise and blending Steve Buscemi’s face with Jennifer Lawrence’s.
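For readers curious what this kind of forensic analysis can look like in practice, below is a minimal sketch of one classic heuristic, error level analysis. It is purely illustrative, not Farid’s actual method, and the file names are hypothetical: re-saving a JPEG at a known quality and inspecting where compression error is unusually high can hint at spliced or edited regions.

```python
# Illustrative sketch only (not Farid's method): error level analysis (ELA),
# a classic image-forensics heuristic. Regions that respond very differently
# to JPEG re-compression can hint at splicing or local edits.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-compress the image in memory at a known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    # Pixels that change substantially on re-compression stand out.
    return ImageChops.difference(original, resaved)

# ela = error_level_analysis("suspect_frame.jpg")  # hypothetical file name
# ela.save("ela_map.png")  # brighter regions warrant a closer look
```

Real deepfake detection is far more sophisticated, combining learned models with physical and statistical cues, but the basic workflow of computing a signal from the media and flagging anomalies is the same.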
“The thing that really keeps me up at night is the so-called ‘Liar’s Dividend,’” Farid said. “If we enter a world where any image, audio, or video that you see online can be fake, nothing has to be real. This is going to create alternate realities that are already starting to happen because of social media and disinformation and misinformation. It’s going to make it much, much worse.”
He warned that the continued proliferation of misinformation on the internet could deeply damage democratic institutions. “The real problem is not that you believe something that is false,” Farid said. “It’s that with that belief comes the erosion of trust in governments and scientific experts and the media…. You have an erosion in the very institutions that you need to have a democracy. And that should worry everybody.”
Panel Discussion: Looking over the Horizon
Following the keynote, Andrew Reddie led a panel discussion that included Farid, as well as Ruby Booth, an InfoSec researcher at Sandia National Laboratories who specializes in the interaction between human behavior and cybersecurity, and Juliana Friend, a postdoctoral fellow who works at the intersection of tech policy and health equity at the Institute for Health Policy Studies (IHPS) at the University of California, San Francisco.
Reddie asked the panelists to identify key “critical uncertainties”: the social, cultural, political, technical, economic, and military dynamics that could change the cybersecurity landscape in the years ahead.
Booth said she is concerned with how the “ubiquity of technology is changing the legal space in which we all live,” particularly due to the proliferation of always-on listening devices in homes and other private spaces. “We really look at the convenience of some of these things, and we don’t look at their impact,” Booth said.
She also said she is concerned about the potential for algorithms used by institutions to reinforce existing human biases; for example, machine learning-based technologies used by law enforcement agencies may have been trained on skewed samples of images that embed racial bias. “What we’re doing is reifying historical bias, but shading it in the moral neutrality of technology,” Booth said.
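To make the feedback loop Booth describes concrete, here is a toy sketch with entirely hypothetical numbers: a system that allocates patrols based on historical arrest records simply mirrors whatever skew those records contain.

```python
# Toy illustration (hypothetical data): a model trained on skewed historical
# records reproduces the skew. Past enforcement concentrated on area "A",
# so "A" dominates the records regardless of the true underlying rates.
from collections import Counter

historical_arrests = ["A"] * 80 + ["B"] * 20  # reflects past patrols, not crime

def allocate_patrols(records, total_patrols=10):
    # "Predict" future activity from past records: the model mirrors the data.
    freq = Counter(records)
    total = sum(freq.values())
    return {area: round(total_patrols * n / total) for area, n in freq.items()}

print(allocate_patrols(historical_arrests))  # {'A': 8, 'B': 2}
# More patrols in "A" yield more arrests in "A", which feeds back into next
# year's records: historical bias, reified under a veneer of neutrality.
```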
Booth also flagged the serious potential for cyber conflict to lead to war, whether provoked by governments or non-state actors. “The thing that keeps me up at night is the fear that we’re going to fall backwards into war because of cyber,” Booth said. “We don’t have international norms around cyber in any meaningful way…. [Viruses or malware] can go into the wild, and can have consequences that you don’t expect.”
The process of developing scenarios for 2030 should focus on anticipating such challenges, Booth said, but should not assume that human behavior will fundamentally change. “One of the areas where futurists tend to go wrong is that we tend to believe that people are going to be different tomorrow than they are today,” she said.
Hany Farid noted that the misinformation challenge will continue unless the “underlying algorithmic amplification” of platforms like TikTok and Facebook is held in check. “Their algorithms are designed to maximize the amount of time you spend on a platform to deliver ads to make money — whatever engages you, good, bad, ugly, or illegal.”
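The incentive Farid describes can be made concrete with a toy ranking function, sketched below with hypothetical names and weights (no platform’s actual algorithm): when the objective rewards only attention, a false but outrage-inducing post can outrank an accurate, dull one.

```python
# Toy sketch of engagement-based feed ranking (hypothetical weights and data).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_watch_time: float  # seconds a model expects a user to linger
    predicted_shares: float      # expected re-shares

def engagement_score(post: Post) -> float:
    # The objective rewards attention; there is no term for accuracy or harm.
    return post.predicted_watch_time + 5.0 * post.predicted_shares

posts = [
    Post("Measured, accurate report", predicted_watch_time=8.0, predicted_shares=0.1),
    Post("False but outrage-inducing claim", predicted_watch_time=25.0, predicted_shares=3.0),
]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p.text for p in feed])  # the misleading post ranks first
```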
He stressed that, to address a challenge like misinformation, everyone needs to play a part: regulators, social media platforms, and internet users themselves. “I don’t want to be a naysayer, I’m not anti-technology,” Farid said. “It’s about looking for solutions, whether those solutions are technical or government regulation, and thinking about how those can play together to make internet technology work more for us, and less against us.”
Juliana Friend pointed out that some threats to our privacy and rights come from everyday technologies like Google searches or text messages, which law enforcement can use to establish an intent to terminate a pregnancy. “We pay very warranted attention to ever-new and ingenious technologies and the risks those bring, but it’s also important to look historically at how the most basic technological processes can make us vulnerable, and whom those vulnerabilities disproportionately affect,” Friend said.
Friend, who received her PhD in Anthropology from UC Berkeley and contributed to CLTC’s Alternative Digital Futures Project, stressed that it will be important for the Cybersecurity Futures 2030 project to integrate the perspectives of different communities around the world.
She explained that her research focuses in part on Senegalese activists who practice sex work, which has led her to challenge her own assumptions about what security means. “I’m an anthropologist by training, and so I’m interested in digital security as a lived experience,” she said. “For many members of this community, the social risks of image-based abuse — of a digital security breach — outweigh the physical risks of transmissible diseases or even intimate partner violence. I realized that digital security can be a health issue, an issue of survival. The stakes are that high.”
Next Steps for Cybersecurity Futures 2030
Cybersecurity Futures 2030 is closely aligned with the Center for Long-Term Cybersecurity’s mission to amplify the upside of the digital revolution, help decision-makers act with foresight, and expand who has access to and participates in cybersecurity. Following this successful kickoff event, the project will continue over the next year and beyond, as we orchestrate a series of workshops and other convenings with stakeholders around the world, with the ultimate goal of helping decision-makers in government, industry, academia, and civil society anticipate and address tomorrow’s cybersecurity challenges.
CLTC is actively engaging companies and organizations with partnership opportunities for Cybersecurity Futures 2030. Please visit the Cybersecurity Futures 2030 page or contact Matthew Nagamine at mnagamine@berkeley.edu to learn more.
See a photo gallery from the event: https://flic.kr/s/aHBqjA9Jy1