In 2015, CLTC developed a set of scenarios depicting various “cybersecurity futures” for the year 2020. Now, as the year 2020 has arrived, Professor Steve Weber, Faculty Director for CLTC, has written a post on the CLTC Bulletin assessing those scenarios, including what we foresaw — and what we didn’t.
“The point of this assessment is not to score the accuracy of our predictions, since that was never the purpose of the work in the first place,” Weber wrote. “The point is to learn about the evolution of the cybersecurity world — and about ourselves — by assessing what kinds of causes and implications we saw clearly and, more importantly, what we failed to foresee and why. What did we overemphasize, and what did we underemphasize? Do we have systematic blind spots, and can they be corrected? What are the most important hypotheses that we can take forward as we construct research with an eye toward 2025?”
Weber notes, for example, that the 2015 scenarios “postulated an overall rate of change that was faster than what we actually observed,” which, he observes, is uncommon in scenario thinking, “where it’s more often the case that the world changes faster than expected.” We also overestimated the market value of data, Weber wrote, as we “didn’t quite understand how issues about bias and other flaws in data sets would become so prominent and begin to restrict how firms and government agencies would be ‘licensed’ by many societies to use these technologies. At the same time, we overestimated the degree to which public infrastructure would seize the opportunity to incorporate IoT and related digital technologies to improve public services.”
At the same time, Weber’s analysis calls out some of the trends that the 2020 scenarios foresaw, including the rise and evolution of algorithmic authoritarianism — the use of digital systems for surveillance and control — in China and other nations. “We argued that a set of techniques, tactics, and technologies would be packaged into an exportable tool kit that would have widespread appeal among governments, and that has indeed happened,” he wrote. “We foresaw many aspects of the push-back against the large platform firms, even though what is now called a ‘techlash’ was barely nascent in 2015.”
Weber also detailed some of the “lessons learned” for anticipating how technology and human interaction will evolve. “I’ll try harder not to be overly distracted by the bright shiny objects of emergent technologies and pay just as much attention to quotidian security issues as they morph and change shape. I’ll be thinking much harder about how traditional geopolitics shape the digital world, just as much as the digital world reshapes geopolitics. I’ll have my eyes out for the broad-based social movement(s) — not yet visible, but possibly nascent — that would remake cybersecurity by adding unconventional and unexpected players to the game. And I resolve to spend more effort on the human dimensions of technology, including the emotional component of that landscape.”
“Security professionals often remind us that the human is the weakest link — but in practice, the human is also the most complicated link and the hardest to understand,” Weber wrote. “No matter how many billions of transistors can be put onto a chip and how many billions of data points can be processed in a machine-learning training set, in 2025, human beings will still be the most complex variables in the cybersecurity equation.”