Blog Post / May 2025

Reflections on Cybersecurity Futures 2025: Looking Back from the Present


In 2018, the Center for Long-Term Cybersecurity partnered with CNA’s Institute for Public Research to develop scenarios exploring potential “cybersecurity futures” for the year 2025. These scenarios were designed not as predictions, but as structured thought experiments about how various forces — including technological advancement, economic interests, social movements, and government policies — might combine to create cybersecurity challenges in the near future. We developed these scenarios following consultation with industry experts, government officials, and academic researchers across multiple global regions, and sought to identify emerging trends that could reshape the cybersecurity landscape.

So now that the year 2025 is upon us, we’re ready to ask the question: How did we do?

This scenario “postmortem” serves two analytical functions. First, it helps us understand which signals we correctly identified as harbingers of future developments, and which we either missed or misinterpreted. Second, it reveals analytical biases and blind spots that we (and presumably others) held, and still might hold.

The objective of scenario thinking and reflection is to refine our collective capacity to anticipate emerging challenges in cybersecurity. Effective strategic planning in cybersecurity — whether focused on technology development, governance frameworks, human capacity building, or organizational resilience — depends on this anticipatory capability.

Key Insights

Our analysis revealed several key insights:

1. We need to better imagine (and convince others about) discontinuous change. While technological development often appears gradual, our scenarios correctly anticipated that certain technologies — especially AI and quantum computing — would advance through dramatic step-functions rather than smooth curves. The rapid advance of large generative AI models since 2022 exemplifies this pattern, as models’ capabilities appeared seemingly overnight, transforming entire industries and catching many regulatory frameworks unprepared. The signals of an imminent step-function were there and we saw them, but we largely failed to convince skeptics of their importance.

2. Advanced technologies have become sovereign assets in great power competition. Our “Quantum Leap” scenario anticipated that nations would treat advanced technological capabilities as foundational elements of national security. That’s not a revolutionary idea, but it has gone even further than our scenarios imagined. Today, nations place high value not only on quantum research, but also on semiconductor manufacturing, AI model development, battery technology, and other advances, with initiatives like the CHIPS Act making explicit the national security framing of core technology policies.

3. Public-private integration has accelerated beyond expectations, creating hybrid institutional forms in which technological development happens at the intersection of commercial and governmental contexts. The boundaries between government and corporate capabilities have blurred to the point where technology development now runs through hybrid institutional structures rather than through traditionally public or private domains. Organizations like OpenAI demonstrate this hybrid structure, operating a nonprofit entity with a for-profit subsidiary while partnering with defense agencies and commercial platforms. The U.S. government has called out China for “military-civil fusion,” but the pattern is hardly unique to China.

4. The mobility of people shapes technological development. Our scenarios focused on technological change and institutional responses. We failed to imagine how important global flows of talent would be to producing (and maintaining and governing) the technologies we discussed. The global migration of top talent, and all attendant fears around espionage and competition, have emerged as major driving factors in how cybersecurity is practiced. This influence is particularly visible in how cybersecurity expertise flows between organizations, creating both vulnerabilities (through insider knowledge transfer) and strengths (through cross-pollination of defensive techniques). The migration patterns of top security researchers and engineers directly affect which organizations can detect sophisticated threats, respond to zero-days, and develop robust security architectures.

5. Identity has become a battlefield in digital security, though not exactly as we anticipated. The fragmented mechanisms by which identity is captured and verified, along with emerging dynamics around algorithmic profiling and deepfakes, have created precisely the kind of contested space our “New Wiggle Room” scenario anticipated. Yet we underestimated how quickly synthetic media would weaponize ambiguity, creating a world in which excessive certainty (through surveillance and verification) exists alongside radical uncertainty (where even video evidence carries diminishing truth value).

A note: we developed these scenarios before the COVID-19 pandemic. While we now see the pandemic as primarily accelerating existing trends rather than fundamentally changing their paths, it did compress timelines, which heightened pressures on security architectures in ways these scenarios did not directly address or anticipate.

The remainder of this piece analyzes our 2025 scenarios in greater detail. First, we summarize each scenario. Next, we examine cross-cutting themes, detailing the trends we correctly identified as well as our blind spots. We close with implications for CISOs, policymakers, and other stakeholders, along with future directions for strategic foresight work.

Our 2025 scenarios

Below are summaries of our 2025 scenarios, along with short videos produced to depict each one.

The full report can be downloaded as a PDF here.

Scenario 1: Quantum Leap

In this scenario, early breakthroughs in quantum computing (initially driven by U.S. military and intelligence projects) prompted attempts to establish a global non-proliferation regime for quantum capabilities. Those attempts failed. As quantum tech spread beyond state control to other countries and even criminal networks, the original “quantum powers” reversed course. Instead of containment, they accelerated sharing of quantum advances with allies, all while grappling with quantum’s role in both legitimate innovation and illicit activities.

Scenario 2: The New Wiggle Room

By 2025, the push to use secure digital tech, the “internet of things” (IoT), and machine learning to quantify messy human life led to an unexpected problem: the loss of human “wiggle room.” Hyper-precise data and surveillance began eliminating the small uncertainties and flexibilities that tend to smooth social interactions. In response, people sought new flexibility through adopting multiple, fluid digital identities. This proliferation of personas helped individuals regain some freedom but also created new security headaches as identity management and verification grew more complex.

Scenario 3: Barlow’s Revenge

In the wake of catastrophic security failures in the late 2010s, the world split into two competing models of internet governance by 2025. One model had governments step back and cede control to large tech firms (ironically fulfilling cyber-libertarian John Perry Barlow’s vision of a tech-controlled cyberspace). The other model embraced full-on digital nationalism, treating control of the internet as an explicit instrument of state power. The sharpest tensions emerged where these two approaches collided, as neither model could completely exclude the other.

Scenario 4: Trust Us

After digital insecurity nearly collapsed the internet economy, companies in this scenario turned over their cybersecurity to an AI-driven mesh network called “SafetyNet.” This autonomous network detected intrusions and patched vulnerabilities in real time, which successfully restored stability to digital operations. However, it also split cyberspace into two realms: the traditional internet (which is less secure but more private) and the SafetyNet environment (which is highly secure but heavily monitored). This divide raised difficult questions about whether the values originally meant to be protected — such as privacy, freedom, and trust — can survive in a world secured by omnipresent AI oversight.

Cross-Cutting Insights

Looking across our four scenarios, several cross-cutting themes emerged, in terms of both what we got right and what we missed. These themes reveal broader patterns in how cybersecurity has evolved over the past five years and highlight analytical blind spots that can inform future forecasting efforts.

Discontinuous Change

Quantum Leap envisioned a “great leap” in quantum computing; we hypothesized that progress would look more like a step function than a gradual curve.

Since we wrote this scenario in 2020, quantum computing has moved faster than contemporary forecasts anticipated. NIST moved up its timeline for transitioning to quantum-resistant cryptography, a signal that analysts now expect a sufficiently powerful quantum computer to become feasible sooner than previously forecast. A recent breakthrough from Microsoft may well pave the way for practical quantum computing — just in time for our 2035 scenario timeframe.
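
To make that transition concrete, here is a minimal sketch of post-quantum key encapsulation, the primitive at the heart of NIST’s new standards. It assumes the open-source liboqs-python bindings (the oqs module) from the Open Quantum Safe project, and that the chosen algorithm identifier is enabled in your build (“Kyber512” historically; newer releases expose the standardized name “ML-KEM-512”). This is an illustration of the mechanism, not material from the original report.

```python
# Minimal post-quantum key encapsulation sketch (illustrative only).
# Assumes the Open Quantum Safe bindings: pip install liboqs-python
import oqs

# "Kyber512" is the historical identifier; newer liboqs builds expose
# the NIST-standardized name "ML-KEM-512" instead.
KEM_ALG = "Kyber512"

with oqs.KeyEncapsulation(KEM_ALG) as client:
    with oqs.KeyEncapsulation(KEM_ALG) as server:
        # The client generates a keypair and publishes the public key.
        public_key = client.generate_keypair()

        # The server encapsulates a fresh shared secret against that key.
        ciphertext, shared_secret_server = server.encap_secret(public_key)

        # The client decapsulates the ciphertext to recover the same secret.
        shared_secret_client = client.decap_secret(ciphertext)

assert shared_secret_client == shared_secret_server  # both sides share a key
```

In deployment, the early pattern has been hybrid modes that pair a classical key exchange with a post-quantum one, hedging against weaknesses in either.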

Beyond quantum computing, we have seen step-function progress in AI: the advent of large, generative models has transformed entire industries seemingly overnight. To somewhat abuse a quote from Hemingway, Quantum Leap correctly forecasted that technological progress tends to move “gradually, then suddenly.”

Advanced Technologies Become Sovereign Assets

Quantum Leap correctly anticipated that some subset of advanced technological capacities would be viewed as critical sovereign assets during great power competition. While we focused on quantum computing, technologies like microchips, AI, and batteries have all become subject to similar dynamics in the years since we wrote this scenario.

Quantum Leap also used nuclear proliferation as a model for state attempts to control the spread of quantum computing. “Proliferation” has proved to be a decent metaphor for state control over advanced technologies. As with nuclear proliferation, Quantum Leap depicted states’ control efforts as having poor efficacy: they may or may not slow down competitors, but they rarely stop them. We have seen this dynamic play out in chips for AI; for example, export controls on top-of-the-line graphics processing units (GPUs) led to the development of DeepSeek’s model, which is less dependent on export-controlled chips.

We correctly identified technological competition as a theater of great power competition, and trade restrictions as a means of waging it. Trade restrictions have become a key tool in the US-China relationship specifically, a dynamic borne out by legislative efforts like the CHIPS Act. But we were focusing on emerging technologies at an early stage, and we missed the obvious point: the dynamic would become visible most rapidly in the most immediately relevant technology in the market at the time of scenario writing, namely the chip-making sector.

Public-Private Fusion Accelerates

We failed to anticipate the velocity of change toward public-private integration. While the Quantum Leap and Barlow’s Revenge scenarios recognized that both state and commercial interests would shape technological development, we did not foresee how deeply these interests would intertwine. The Biden administration’s embrace of industrial policy, coupled with a bipartisan shift toward national-security-oriented dirigisme, created a new paradigm that transcended traditional state-market distinctions.

This fusion is visible in how companies like OpenAI operate as hybrid entities that are structured as nonprofits with for-profit subsidiaries, while also partnering with DARPA, USAID, and national laboratories. We see similar patterns in how commercial satellite companies support military operations, and how AI firms pursue dual-use development paths.

On that note, our scenarios relied primarily on a binary distinction between state control and market governance, but open-source development has introduced a third logic that defies this dichotomy. This blind spot is evident in our treatment of AI governance, where we failed to anticipate how open-source foundation models would complicate regulatory approaches based on either state oversight or corporate accountability.

Meta’s release of Llama 2, and the release of subsequent open-source large language models like DeepSeek’s, have created pressure toward openness by establishing permissive licensing standards, enabling third-party auditing, and fostering community-based governance through platforms like Hugging Face. Meanwhile, companies like Anthropic and OpenAI have (for now) resisted this trend, suggesting a potential bifurcation in the market. These divergent approaches represent competing theories about how to balance innovation with safety — a tension our scenarios did not adequately explore.

Governments have struggled to develop coherent positions on open-source AI systems: the U.S. funds open-source AI research through DARPA while simultaneously considering controls that would restrict its export; the E.U. has funded an OpenEuroLLM effort, blending its longstanding interest in both technical sovereignty and open standards. The open-source dimension introduces complex questions about attribution, liability, and enforcement that transcend the state/market binary our scenarios employed.

Human Capital Matters

While Quantum Leap recognized the importance of technical talent (even depicting criminal cartels kidnapping top quantum scientists), we underestimated how human capital mobility would become a defining characteristic of technological development.

The flow of researchers and engineers between countries, companies, and institutions has proven to be at least as important as physical technology transfer or intellectual property controls. This trend is particularly visible in AI, where researcher movements between academia, industry, and government have shaped the development and diffusion of capabilities. For example, key scientists moving between organizations like Google DeepMind, OpenAI, and Anthropic have transferred crucial knowledge about foundation model architectures, creating competitive advantages for their new employers while accelerating overall capability development. The competition for AI talent has become a proxy battle in great power competition, with visa policies and immigration rules becoming instruments of technology policy. A testament to this point: DeepSeek released its model weights publicly, but confiscated the passports of some of its engineers.

This emphasis on human capital reflects an insight about technological development: knowledge often resides in people rather than in documented technologies or formal repositories, and controlling technology means controlling the human networks that create and understand it. (We revisited this dynamic in Cybersecurity Futures 2030; see Key Findings and Section 1.1).

Identity is a Battlefield

The New Wiggle Room identified ambiguity as a social lubricant, and anticipated how people might seek identity fluidity as a response to excessive precision. However, we missed a development: the weaponization of ambiguity through technologies like deepfakes and their second-order effects on epistemic security.

Synthetic media technologies have evolved not only to create individual instances of false content, but also to undermine trust in authentic content — what researchers term the “liar’s dividend.” When any unflattering recording can be dismissed as fabricated, the epistemological foundation for accountability erodes. This dynamic extends beyond individual identities to encompass questions about collective reality.

The contradiction we failed to anticipate is how generative AI would simultaneously create both excessive certainty (through behavioral prediction and identity verification systems) and radical uncertainty (through the proliferation of synthetic content). Organizations now face the dual mandate of establishing sufficient precision for operational security while maintaining enough epistemological flexibility to function in an environment where digital artifacts carry diminishing evidentiary weight.

Identity also hooks into concerns about privacy, which have evolved considerably since we wrote these scenarios in 2020. The New Wiggle Room anticipated that privacy concerns would peak, then fade, as people accepted the tradeoffs associated with sensing and prediction. This proved partly correct; baseline privacy concerns have diminished as a focal point of public discourse. However, we failed to anticipate how privacy concerns would transform rather than disappear, becoming subsumed within debates about AI governance.

Privacy and AI concerns have converged around issues like inferential privacy (protecting against what systems can infer about individuals, rather than what data they explicitly collect), algorithmic transparency (understanding how systems use personal data), and autonomous decision-making (maintaining human control over consequential choices). This convergence reflects an evolution in how people conceptualize their relationship with digital systems, moving from concerns about data collection to concerns about data usage and algorithmic autonomy.

Other Surprises

Finally, there were major real-world developments outside our scenario storylines that significantly impacted cybersecurity. These “wildcard” events were not explicitly covered in our 2020 scenarios, but they have shaped the 2020–2025 landscape.

The Pandemic

We didn’t see a pandemic coming. While COVID-19 served primarily as an accelerant of existing trends toward digitalization rather than fundamentally altering their trajectories, it compressed timelines and intensified pressures on security architectures, catalyzing new security considerations in the process.

The rapid shift to remote work environments created new exposure for many organizations, accelerating digital transformation initiatives while simultaneously expanding attack surfaces. However, the catastrophic security failures many predicted would accompany this transition largely failed to materialize. This resilience suggests that security capabilities had matured more than our pre-pandemic scenarios anticipated, with cloud security models proving more adaptable than expected to rapid organizational change.

The pandemic also revealed how digital infrastructure has become foundational to societal resilience. As physical interactions became constrained, digital systems assumed critical roles in maintaining economic activity, educational continuity, and public health response. This elevation of digital infrastructure’s importance further reinforced the sovereignty concerns expressed in scenarios like “Barlow’s Revenge,” as governments recognized the strategic importance of maintaining functional digital ecosystems during crisis periods.

Rather than introducing entirely new dynamics, the pandemic acted as a natural experiment that tested the robustness of existing digital systems and governance arrangements. The results of this experiment — the ability of most digital infrastructure to accommodate shifts in usage patterns and organizational models — highlight both the maturity of core internet technologies and the adaptability of security frameworks that had evolved in the preceding decade.

Ransomware: An Innovation in Monetization Strategies Drives Capability Diffusion

The dramatic evolution of ransomware represents another development that our scenarios did not sufficiently capture. While we anticipated further criminal adoption of advanced technologies, we failed to recognize how innovation in monetization would transform the cybercriminal ecosystem. Ransomware-as-a-service (RaaS) platforms democratized advanced attack capabilities, creating an efficient market for specialized criminal services and substantially lowering barriers to entry.

This development reveals a limitation in our approach: we focused primarily on technological capabilities rather than business model innovations. The ransomware phenomenon demonstrates how changes in economic organization — specifically, the development of specialized criminal value chains — can reshape threat landscapes more rapidly than technical innovations alone.

The most salient aspect of ransomware’s evolution has been its revelation of the political economy of cybercrime. The entanglement of cryptocurrency markets, insurance mechanisms, and state-sponsored groups has created feedback loops that traditional security frameworks struggle to address. That North Korean state-sponsored groups are funding weapons programs through cryptocurrency-facilitated ransomware exemplifies how cybersecurity, financial systems, and geopolitics have become inextricably intertwined.

This entanglement illustrates a pattern in which technical cybersecurity challenges increasingly manifest as problems of global governance and economic coordination. The challenge is not simply about mitigating ransomware attacks on a technical level, but addressing the complex ecosystem that enables their profitability, which requires simultaneous coordination across cryptocurrency exchanges, insurance markets, financial regulators, and diplomatic channels.

Supply Chain Vulnerabilities: The New Attack Surface

Our scenarios underestimated the growing centrality of supply chain attacks as vectors for high-impact compromises. According to the World Economic Forum’s Global Cybersecurity Outlook 2025, 54% of large organizations cite supply chain challenges as the greatest barrier to achieving cyber resilience.

The SolarWinds incident in 2020, when Russian actors compromised software updates to infiltrate thousands of organizations, demonstrated how adversaries could leverage trusted software distribution channels to bypass conventional security controls. These attacks exposed a form of offense-dominance, at least as it applies to supply chain attacks: while defenders must secure their entire supply chains, attackers need only compromise the weakest link to achieve their objectives.
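
The baseline control this history motivates is verifying artifacts against integrity data obtained out of band, rather than trusting the distribution channel alone. Below is a minimal sketch using only Python’s standard library; the artifact name and pinned digest are hypothetical placeholders. Notably, a check like this would not have stopped SolarWinds itself, where the vendor’s own build pipeline produced the malicious update, which is exactly the offense-dominant dynamic described above.

```python
# Minimal supply-chain integrity check (illustrative only): refuse to
# install an artifact whose SHA-256 digest does not match a value pinned
# out of band. The artifact name and digest below are hypothetical.
import hashlib
import sys

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 to avoid loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = "vendor-update.pkg"  # hypothetical artifact
if sha256_of(artifact) != PINNED_SHA256:
    sys.exit(f"refusing to install {artifact}: digest mismatch")
print(f"{artifact}: digest verified")
```

Hence the push toward complementary measures like reproducible builds and software bills of materials (SBOMs), which aim to make the build pipeline itself auditable rather than merely trusted.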

This blind spot in our scenarios stems from an insufficiently expansive conceptualization of “the system” requiring protection. We focused primarily on direct attacks against organizational assets rather than the complex web of interdependencies that characterize modern digital ecosystems. As one participant in our Washington workshop observed, “digital transformation means systems transformation,” and the attack surface has been extended from core technical infrastructure to encompass “everything from human factors to the plastic knobs that go into servers.”

The implications for cybersecurity governance are profound: security can no longer be conceived as a property of individual organizations but must be understood as an emergent characteristic of interconnected systems. This shift challenges conventional governance approaches predicated on organizational boundaries and regulatory jurisdictions. Organizations increasingly recognize that securing their supply chains requires visibility across an expanding ecosystem of vendors, partners, and service providers — a task that exceeds traditional security frameworks and tools. Addressing this challenge may require new forms of collective security arrangements that transcend individual organizational perimeters.

Workshop Insights Revisited

The workshops we convened between 2018 and 2020 across global regions — from Palo Alto to Munich, Singapore to Geneva — produced insights that merit re-examination in light of developments in the ensuing years. These workshops revealed distinctive regional perspectives on cybersecurity governance that have persisted through 2025.

Lender of Last Resort?

When asked who would “save the day” if cybersecurity goes catastrophically wrong, participants in Palo Alto insisted that “it will have to be the large firms, since that is where the capability lies.” In Munich, the response reflected a different institutional landscape: “Europe lacks the firms, and we do not trust governments to respond, so we need a citizen social movement.” Singapore participants expressed greater confidence in state capacity: “It probably will not go that wrong, but if it does, the government is the fixer-of-last-resort.”

These divergent perspectives reflect enduring institutional and cultural asymmetries that shape cybersecurity governance approaches globally. The absence of a recognized “lender of last resort” for catastrophic cybersecurity failures remains a gap in the international security architecture. No entity, public or private, has established a credible commitment to backstop systemic failures. Ironically, the relative robustness demonstrated during the pandemic shock may have reduced the sense of urgency to grapple with this problem.

The incentives to develop such backstops will likely remain limited until after a genuinely catastrophic failure materializes, presenting a classic collective action problem for the international community. This problem mirrors historical patterns in financial regulation, where central banking functions and deposit insurance schemes typically emerged only after devastating financial crises demonstrated their necessity. Despite mounting systemic risks, the relevant institutional architecture remains underdeveloped in the cybersecurity domain.

Addressing this gap may require innovative institutional arrangements that blend public and private capacities, perhaps through mutual insurance schemes, sovereign cyber bonds, or public-private partnerships designed to provide response capacity for catastrophic events that exceed any single organization’s resources.

From Confidentiality to Integrity

Workshop participants across regions anticipated a shift from data breaches to data manipulation as the predominant security concern — a shift in concern from “confidentiality” to “integrity,” in the classic CIA triad. This prediction has been partially realized through the proliferation of adversarial machine learning techniques, subtle training data poisoning, and disinformation campaigns. However, the speed of this transition has been uneven across sectors. Financial services and critical infrastructure have invested heavily in data integrity verification systems, while consumer-oriented platforms continue to prioritize breach prevention over manipulation detection.
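
As a minimal sketch of what integrity protection adds beyond confidentiality, the example below tags records with a keyed MAC at write time so that silent manipulation becomes detectable at read time. It uses only Python’s standard library; the in-process key and record format are hypothetical simplifications (a production system would keep keys in an HSM or key management service).

```python
# Minimal data-integrity (tamper-evidence) sketch: the goal is to detect
# manipulation, not merely to prevent disclosure. Key handling here is a
# hypothetical simplification for illustration.
import hashlib
import hmac
import os

KEY = os.urandom(32)  # per-deployment secret key (illustrative)

def tag(record: bytes) -> bytes:
    """Compute a keyed MAC over the record at write time."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, mac: bytes) -> bool:
    """Recompute the MAC and compare in constant time at read time."""
    expected = hmac.new(KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

row = b"acct=1234;balance=100.00"
mac = tag(row)

assert verify(row, mac)                              # authentic record passes
assert not verify(b"acct=1234;balance=999.99", mac)  # manipulation is caught
```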

A notable workshop finding that has aged less well concerned the projected relevance of “cyber norms” for international governance. Participants expressed skepticism about vague norm-setting exercises, perhaps because adherence to them is voluntary. Yet few anticipated that similarly voluntary but technically grounded frameworks would supplant norms as the primary governance mechanism. The intensifying competition around technical standards-setting — from IEEE’s work on autonomous systems to NIST’s AI Risk Management Framework — has largely displaced diplomatic norm-building efforts as the central arena for contestation over digital governance.

CISOs: From Broad Remit to Distributed Responsibilities?

Workshop discussions anticipated that chief information security officers (CISOs) would need to engage with policy and geopolitics to a much greater degree than they had previously. This prediction has materialized, with CISOs increasingly functioning as translators between technical security operations and strategic business decisions. However, this transformation has revealed a fundamental tension: as the CISO remit expands to encompass more domains, the position risks becoming impossible to execute effectively.

This evolution suggests a potential organizational innovation: narrowing the CISO’s remit while elevating security thinking as an organizational capability. Just as design thinking has become an organizational competence rather than solely the province of design departments, security thinking may need to be integrated across functional areas, rather than concentrated in a single executive role. This distributed approach to security governance presents challenges for accountability, but may better reflect the cross-functional nature of contemporary security challenges.

The broadening of the CISO portfolio reflects the expanding surface area of risk in digitally transformed organizations. CISOs now regularly engage with a range of issues — including geopolitical tensions affecting digital supply chains, regulatory compliance across multiple jurisdictions, merger and acquisition due diligence, and brand reputation management — all while maintaining technical depth sufficient to address evolving threat vectors.

This expanded scope creates unrealistic expectations for any single executive, suggesting that organizations may need to reconceptualize security leadership along the distributed lines described above: embedding security expertise within different functional areas while maintaining central coordination. Limiting the CISO’s remit in this way recognizes that effective security governance in complex environments requires both specialized expertise and broad organizational integration.

What Now?

What can future scenario developers learn from our successes and failures? Our analysis reveals three fundamental insights that may guide further strategic foresight in cybersecurity and its governance.

First, institutions shape outcomes at least as much as technologies themselves do. The “how” of development — whether through open-source communities, public-private partnerships, or other hybrid forms — determines capabilities, access patterns, and governance structures as much as or more than technical specifications alone. Google’s and Microsoft’s quantum computing initiatives do not operate in isolation from state interests; they operate within an institutional ecology that shapes research priorities, deployment decisions, and security practices. These institutional arrangements are not merely implementation details but constitutive elements of technological trajectories. In 2025, we appear to be heading into an era in which state interests matter more than they have for at least the past decade.

Second, technological governance rarely presents binary choices between state and market control. The most consequential developments typically emerge through hybrid forms that combine elements from multiple models. Open-source AI development exemplifies this dynamic, with governance mechanisms that incorporate elements of community standards, corporate oversight, and state regulation in varying proportions. Understanding these hybrid forms requires moving beyond simplistic dichotomies toward more nuanced institutional analysis in specific cases.

Third, psychological and cultural dimensions of human-technology interaction shape adoption patterns and security outcomes. Our scenarios underestimated the persistence of human judgment in security-critical contexts, and the resilience of cultural assumptions about what constitutes authentic versus artificial outputs. For example, despite advances in deepfakes, most people still rely on perceived authenticity signals when evaluating content, and organizations continue to value human verification even as technical capabilities for automated detection improve. Purely algorithmic approaches fall short without human-in-the-loop components, not necessarily because of technical limitations, but because humans instinctively seek verifiable sources of authority and trust relationships that AI systems struggle to replicate. Likewise, the widespread adoption of reinforcement learning from human feedback (RLHF) acknowledges that AI systems need human guidance on values and preferences that are difficult to encode algorithmically. The next several years are almost certain to test how robust those beliefs are, with the advent of human-level or greater intelligence in multiple domains, and possibly of what will be broadly recognized as artificial general intelligence (AGI).

With these insights, we offer three tensions that we believe are likely to shape cybersecurity governance through 2030:

1. Sovereignty versus interoperability: How will competing visions of digital sovereignty balance against the need for interoperable systems? The localization of data and computational infrastructure creates security trade-offs between resilience through fragmentation and through integration. The notion of a global IT stack appears dead for now, which implies both a less uniform attack surface and a greater fragmentation of defense capabilities and resources.

2. Transparency versus capability: How much technical performance are organizations willing to sacrifice for interpretability and verifiability? This tension manifests across AI, quantum, and security systems, where black-box algorithms offer performance advantages but undermine accountability. Organizations increasingly face concrete trade-offs between using high-performing but opaque systems versus more transparent but potentially less capable alternatives, particularly in security-critical functions.

3. Human judgment versus automation: What decision boundaries will emerge between algorithmic and human authority? The optimal allocation of agency between human and machine systems remains contested across domains, from content moderation to critical infrastructure protection, from Hollywood to health care. RLHF presents a dominant paradigm for negotiating this boundary today. But the debate about the role of humans vs. machines will likely recur everywhere in the second half of the decade, and right now there is almost no consensus on how to manage it.

Scenarios do not — and cannot — precisely predict the future. However, they serve to surface strategic decisions that we can take in the present to better prepare for multiple possible futures. Looking back at our 2020 visions of 2025, we see that some of the trends that underpinned our scenarios did indeed shape the future. Just as importantly, the discussions around those scenarios highlighted underlying tensions (like those outlined above) that are still playing out five years later. The value of this “postmortem” analysis will be in translating those findings into guidance for the path ahead, particularly for those decision-makers who are now looking ahead to 2030 and beyond.

About the Authors

Nick Merrill directs the Daylight Security Research Lab at the UC Berkeley Center for Long-Term Cybersecurity. His work blends methods from design to data science to understand how corporate and state power tangle in technical infrastructures like the internet, and how that tangling circumscribes the lives people can live. Threat identification techniques developed by the Daylight Lab are used by the U.S. Cybersecurity and Infrastructure Security Agency, Taiwan’s Ministry of Digital Affairs, and Meta. Nick has published over two dozen peer-reviewed articles in venues like CHI, CSCW, and Duke Law Review. His research has been covered in news outlets worldwide, including CNN, CBS, and Forbes. He serves as an advisor to the Christchurch Call, a consortium of national governments, technology companies, and academics working to combat terrorist and violent extremist content online.

Steven Weber is a retired professor at the UC Berkeley School of Information. He was the founder and faculty director of the UC Berkeley Center for Long-Term Cybersecurity. Professor Weber served as a special consultant to the president of the European Bank for Reconstruction and Development, and has held academic fellowships with the Council on Foreign Relations and the Center for Advanced Study in the Behavioral Sciences. He is a widely published author whose books include The Success of Open Source and Bloc by Bloc: How to Build a Global Enterprise for the New Regional Order, which explains how economic geography is increasingly defined by technology rules and standards. One of the world’s most experienced practitioners of scenario planning, Weber has worked with over a hundred companies and government organizations to develop this discipline as a strategic planning tool.