Blog Post / December 2025

From Automation to Autonomy: The Next Leap in AI-Enabled Cybercrimes


Over the past year, the Center for Long-Term Cybersecurity has supported an initiative entitled “AI-Enabled Cybercrime: Exploring Risks, Building Awareness, and Guiding Policy Responses,” led by Gil Baram, a Non-Resident Research Fellow. This effort is designed to study how AI affects daily life, human security, and the evolving cybercrime ecosystem. As part of the project, Baram — together with Derek Manky, leader of FortiGuard Labs’ Global Threat Intelligence Team at Fortinet, and Helena Huang, Associate Research Fellow with the Executive Deputy Chairman’s Office at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore — led a tabletop exercise (TTX) in Singapore that brought together government agencies, cybersecurity practitioners, private-sector leaders, and academic partners to examine how AI is reshaping cybercrime. (Visit this page to read about a TTX held in Berkeley in December 2024.)

Held in October 2025, the exercise drew participants from critical infrastructure, industry, and government to stress-test real-world scenarios involving AI-generated malware, deepfake-enabled fraud, and accelerated attack chains. Participants explored both private-sector and national-level responses to AI-enabled attacks, from initial breach investigation to large-scale disruptions of energy, transport, and water systems. By design, the TTX produced not only observations on attacker capability and defender readiness, but also data for ongoing academic research at CLTC, helping inform forthcoming publications and industry guidance on AI-enabled cybercrime.

In the blog post below, Baram, Manky, and Huang share some of the key takeaways from the tabletop exercise in Singapore.


Something fundamental is shifting beneath the cybersecurity landscape. AI hasn't created new motives for cybercriminals; money, leverage, and access still drive the ecosystem. But it has dramatically increased the speed, scale, and sophistication with which those motives are pursued. That was the recurring message during the tabletop exercise we led in Singapore in October 2025, echoed by government, industry, and academic participants alike.

The main message for senior leaders was clear: AI amplifies known vulnerabilities, accelerates existing threats, and reveals every flaw in processes, communication, and governance. In a high-trust, digital-first society like Singapore, those flaws are even more significant.

The New Cybercrime Marketplace: Faster, Cheaper, Smarter

The conversation started with a blunt assessment. Cybercrime today resembles a seasoned underground industry rather than random hacking. Initial access brokers, ransomware negotiators, deepfake vendors, and malware developers don't just coexist; they work together. AI drives this change.

It lowers technical barriers, cuts operating costs, and allows even low-skilled actors to run operations that previously required expert capability. We are already seeing signs of this transition, with AI-generated malware samples and deepfake-assisted financial fraud appearing in real cases, even if full-scale deployment hasn’t yet materialized.

Human Weakness: Still the Primary Attack Surface

Participants kept returning to one point: humans remain the critical failure point. AI simply makes it easier to prey on trust, fear, urgency, and authority. Executives should also be aware of a difficult emerging reality: some scam operators may themselves be victims of trafficking or coercion. 

This blurs traditional distinctions between offender and victim and creates new complexities for corporate compliance, HR, and legal teams that handle fraud incidents.

AI simply makes it easier to prey on trust, fear, urgency, and authority.

Why Governance Matters More Than Tools

When participants worked through the first corporate breach scenario, the most interesting tension was organizational, not technical. Before any logs were examined or malware isolated, the teams debated roles, decision rights, and reporting lines. In a society built on clarity and structure, ambiguity in crisis slows response.

For leadership, this was a powerful reminder: you cannot improvise a governance model in the middle of an incident. Well-defined playbooks, secure communication channels, and pre-agreed escalation paths matter as much as SIEM dashboards or forensic tools.

AI tools did play a role, but a narrow one. Teams used AI to triage massive log volumes and surface anomalies, but they refused to rely on AI for investigative conclusions or attribution. Human accountability remained non-negotiable.

When the Lights Go Out: AI Escalation in Critical Infrastructure

The exercise escalated when Singapore’s power, water, and transport systems all detected similar AI-generated malware. Participants immediately treated this as a campaign, not an isolated breach. AI’s ability to mutate code and avoid detection means one successful attack almost certainly implies more are coming.

National-level responses focused on stabilizing essential services, coordinating across sectors, and maintaining public confidence. The communications challenge was particularly delicate: enough transparency to prevent rumors, but no speculation that could trigger panic or diplomatic missteps.

For decision-makers, the message was unmistakable. When AI is used to strike critical infrastructure, the stakes extend beyond cybersecurity: they become operational, political, and societal.

Attribution: A Strategic Choice, Not a Technical Step

Attribution sparked a nuanced debate. On the corporate side, attribution beyond technical indicators was often secondary to containment. On the government side, participants stressed the need for measured, evidence-based attribution, mindful of false flags and regional sensitivities.

Leaders were encouraged to treat attribution as a strategic lever. It can illuminate attacker intent, shape future defenses, and inform diplomatic posture, but if mishandled, it can inflame tensions or mislead the public. In an era of AI-generated deception, patience and precision matter more than ever.

Deepfakes, Disinformation, and the New Front of Social Risk

The workshop repeatedly revisited deepfakes, not only as methods of fraud but also as tools for disinformation. Participants envisioned scenarios where attackers impersonate executives, regulators, or security providers to steal money or confidential information. They also examined the impact of foreign news outlets spreading false claims about national networks.

The recommended strategy combined technical detection tools with disciplined communication. Leaders should discuss deepfake risks in general terms, outline practical verification steps, and avoid tying warnings too closely to active incidents, which could spark panic or copycat behavior.

Collaboration isn’t just a soft value. It’s a fundamental security measure in an AI-accelerated world.

Stepping back, the workshop painted a picture of AI-enabled cybercrime in transition. Experimentation is giving way to commercialization, and commercialization is rapidly becoming operationalization. Attackers currently move faster because they face fewer constraints. Defenders must navigate regulations, evidence preservation, procurement cycles, and public accountability. But AI is available to both sides. The challenge for leaders is to deploy it in ways that enhance visibility, accelerate triage, and strengthen resilience without compromising trust, legal defensibility, or strategic stability.

The final reflection was straightforward but meaningful: technology alone won’t bridge the gap. Preparedness must be practiced, trust must be built, and deterrence must adapt. In a highly connected environment like Singapore, collaboration isn’t just a soft value. It’s a fundamental security measure in an AI-accelerated world.

About the Authors

Dr. Gil Baram is a Senior Lecturer (US Associate Professor) in the Political Studies Department at Bar-Ilan University and a Non-Resident Research Scholar at the University of California, Berkeley Center for Long-Term Cybersecurity (CLTC). Previously, she was a Fulbright Cybersecurity postdoctoral fellow at Stanford University's Center for International Security and Cooperation (CISAC). Dr. Baram's research explores, among other areas, AI-driven cyber threats, the impact of technology on national security, the role of intelligence agencies in cyber operations, cyber threats to space systems, cyber diplomacy and norms development, and data-based approaches to cyber conflict research.

Helena Huang is an Associate Research Fellow with the Executive Deputy Chairman's Office at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Prior to joining RSIS, she worked at the Cyber Security Agency of Singapore (CSA). She holds a Master of Science in International Relations from RSIS and a Bachelor of Arts (Honours) in English from NTU. Her research straddles digital and cyber issues, covering topics such as how the use of digital technologies impacts states and societies, digital rights, cybercrime, and business and human rights.

Derek Manky leads FortiGuard Labs' Global Threat Intelligence Team at Fortinet, bringing over 20 years of cybersecurity experience. He has established frameworks in the security industry, including responsible vulnerability disclosure, which has guided the responsible reporting of more than 1,000 zero-day vulnerabilities. Manky has been with the Cyber Threat Alliance since it was founded in May 2014. For more than 15 years he has been highly engaged in collaborative industry efforts and public-private partnerships, including the Cyber Threat Alliance, FIRST.org, NATO NICP, MITRE CTID, INTERPOL Project Gateway, the World Economic Forum Partnership Against Cybercrime (PAC), and the Cybercrime Atlas. His vision is applied to help shape the future of proactive cybersecurity, with the ultimate goal of making a positive impact in the global fight against cybercrime. In addition to the Cyber Threat Alliance board, Derek sits on the MITRE Engenuity Center for Threat-Informed Defense Advisory Council and the World Economic Forum Cybercrime Atlas Executive Committee.