News / January 2025

Beyond Phishing: Exploring the Rise of AI-enabled Cybercrime

By Derek Manky and Gil Baram

Over the last year, discussions about AI-enabled cybercrime have shifted from abstract speculation to concrete reality. During a recent tabletop exercise (TTX) with cybersecurity professionals, law enforcement, and industry experts, conducted at the University of California, Berkeley, we explored emerging trends in AI-enabled cyberattacks as well as where the threat landscape might be headed in the near future. The cybercrime landscape continues to expand, with a rise in tech support scams, investment fraud, “pig butchering” schemes, and FOMO-driven scams targeting those eager to seize seemingly lucrative opportunities. These crimes are growing in both frequency and financial impact. Below, we take a look at what AI-enabled cybercrime looks like today and what we might expect in the coming years.

Participants from the first AI-Enabled Table Top Workshop hosted at UC Berkeley in December 2024.

The Current Landscape: Lower Barriers, Higher Threats

A New Era of “Weaponized” AI

While automation has existed in cybercrime for years (think phishing kits, malware builders, or exploit frameworks), the widespread availability of Large Language Models (LLMs) has rapidly lowered barriers to entry. Criminals can now generate highly customized phishing emails, create convincing fake voices for social engineering attacks, and streamline reconnaissance. Though many people initially associate AI in cybercrime with chatbots writing malware, the immediate impact is primarily on the ‘left side’ of the cyber kill chain – especially the reconnaissance phase, along with some basic weaponization. Attackers are using AI to target victims more precisely and create more tailored lures, thereby increasing their success rates.

Deepfakes and Social Engineering

Deepfake technology – once laborious and expensive – has become far more accessible. Criminals can clone voices with just an hour of YouTube footage and an $11 subscription, making phone-based scams much more convincing. Attackers have successfully impersonated CEOs to trick employees into buying gift cards or initiating wire transfers. As AI-powered editing tools become more common, impersonation attacks will continue to rise in sophistication.

Lower Barriers, Wider Participation

Participants in the TTX highlighted that AI is not necessarily creating new criminals but is instead enabling individuals already involved in other forms of crime to transition into cybercrime. Low-level criminal outfits, previously deterred by the technical skill required for cyberattacks, are now adopting AI-driven tactics. This shift allows individuals with limited familiarity with traditional hacking tools to create convincing phishing campaigns or craft malicious code with minimal effort. By lowering the technical barrier, AI “supercharges” the capabilities of existing criminals, making cybercrime more accessible and attractive due to its relatively lower risk and cost compared to traditional street-level offenses.

Key Attack Vectors and Trends in 2025 and Beyond

Hyper-Targeted Phishing

Today’s phishing is more localized, personalized, and persuasive than ever. Cybercriminals feed location- or organization-specific data into LLMs to craft emails that appear to come from a legitimate local bank or corporate contact. This personalization increases the likelihood of success and reflects a growing trend: criminals no longer rely on broad “Nigerian prince” messages. Instead, they are funneling AI-assisted reconnaissance and context generation into meticulously targeted campaigns. Moreover, by tailoring the language used in these communications – whether it be a recipient’s regional dialect or their native tongue – cybercriminals gain a powerful force multiplier. This linguistic localization not only broadens the global reach of phishing attacks but also expands the overall attack surface, since victims are more apt to trust communications that appear authentically local or culturally familiar.

AI Agents for Malware and Reconnaissance

Experts predict that “AI agents” – autonomous systems trained to perform particular tasks – will evolve quickly. These agents can open browsers, extract passwords, or scan entire networks, all without significant human oversight. In the near future, one attacker might manage dozens of AI-driven agents, each executing different steps in the kill chain faster and more efficiently than a team of human hackers ever could. Swarm intelligence – where AI agents collaborate autonomously – has not yet materialized. However, imagining AI agents within botnets actively discovering Common Vulnerabilities and Exposures (CVEs) is a fascinating, albeit concerning, possibility.

The Rise of Deepfake-as-a-Service

As more providers offer “deepfake generation on demand,” voice and video impersonation will move into a turn-key business model, just as ransomware did. This service model will further democratize deepfakes and enable fraudsters to run more complex social engineering campaigns, such as impersonating high-level executives or family members in real time to manipulate targets.

Insider Threat 2.0

One scenario raised during the TTX involved attackers using AI-driven identity creation to apply for remote jobs at tech firms, passing standard background checks with fabricated personal histories and even deepfake Zoom interviews. Once inside, these “employees” gain privileged access to systems and data. Given the complexities of verifying global hires, especially for remote roles, companies will need more robust authentication measures to thwart such ploys.

Automated Vulnerability Scanning and Exploitation

Right now, we see AI primarily used in the reconnaissance and initial intrusion stages. However, participants warned that the next phase is advanced vulnerability discovery and exploitation. In a matter of hours, AI-enabled tools could scan large codebases, identifying both zero-day and n-day vulnerabilities. This is especially alarming since zero-day discovery poses the greatest immediate risk, while n-day vulnerabilities can be quickly adapted into diverse exploit scenarios. Equally concerning is the potential to auto-generate exploits for these detected vulnerabilities, regardless of whether they are zero-day or n-day. Defenders already struggle to keep pace with the proliferation of vulnerabilities; the introduction of AI will only intensify that challenge by accelerating both the discovery of new threats and the retooling of existing ones.
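
On the defender's side of that race, even simple automation helps close the gap. As a small, hedged illustration, the Python sketch below queries the public OSV vulnerability database (api.osv.dev) to check whether a pinned dependency version has known advisories; the package name and version are placeholders chosen for illustration, not part of the TTX findings.

# A minimal sketch of automated dependency checking against the public
# OSV database. Package name and version below are illustrative.
import json
import urllib.request

def query_osv(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known vulnerability records for a package version from OSV."""
    payload = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read()).get("vulns", [])

# Example: an older library release that has published advisories.
for vuln in query_osv("requests", "2.19.1"):
    print(vuln["id"], "-", vuln.get("summary", "no summary"))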

Mitigation and Looking Ahead

Security by Design

Many participants emphasized the need for organizations to adopt secure-by-design and secure-by-default principles. This involves integrating stronger authentication, continuous monitoring, and robust encryption from the outset rather than treating them as afterthoughts. While regulators and security bodies like CISA are pushing for stricter guidelines, achieving broad compliance is challenging. One potential policy lever is introducing industry-specific mandatory reporting requirements, with clearly defined timelines, to improve transparency and accountability. Additionally, early interventions, such as providing tools and guidance for local law enforcement to track incidents and report data to federal agencies like the FBI and CISA, could strengthen regional and national responses.
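
As one concrete example of building authentication in from the outset, the sketch below shows how a time-based one-time password (TOTP) second factor can be enrolled and verified using the open-source pyotp library. It is a minimal sketch: the account name and issuer are placeholders, and secret storage, rate limiting, and recovery codes are deliberately omitted.

# A minimal TOTP enrollment and verification sketch using pyotp.
# Names below are placeholders; this is not a production design.
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI that an
# authenticator app can consume (typically rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp")
print("Provisioning URI:", uri)

# Verification: check the 6-digit code the user submits at login.
def verify_second_factor(submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

print(verify_second_factor(totp.now()))   # True when codes match
print(verify_second_factor("000000"))     # Almost certainly False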

AI for Defense

The same AI technologies powering cybercrime can be harnessed for defensive purposes. Automated threat detection and response systems, behavioral analysis tools, and deepfake detection software are rapidly evolving. By leveraging AI to conduct real-time scanning for abnormal access attempts or suspicious communications, defenders can significantly raise the bar for attackers. This approach not only shortens detection and response times – pinpointing and neutralizing threats before they escalate – but also increases the cost of cybercrime for financially motivated actors, potentially deterring them. However, it also fuels a broader arms race, as adversaries and defenders race to outpace each other with increasingly sophisticated AI tools.
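
To make the behavioral-analysis idea concrete, the hedged sketch below applies unsupervised anomaly detection to login telemetry using scikit-learn's IsolationForest. The features, sample values, and contamination setting are illustrative assumptions, not a production detection pipeline.

# A minimal sketch of behavioral anomaly detection over login events.
# Feature choices and values are illustrative; real deployments would use
# richer telemetry (device fingerprints, geo-velocity, session history).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_attempts_last_hour, bytes_transferred_mb]
baseline_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [13, 1, 10], [16, 0, 18], [9, 0, 9], [15, 2, 14],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_logins)

# A 3 a.m. login with many failures and a large transfer should stand out.
new_events = np.array([[10, 0, 11], [3, 7, 950]])
flags = model.predict(new_events)              # 1 = normal, -1 = anomalous
scores = model.decision_function(new_events)   # lower = more anomalous

for event, flag, score in zip(new_events, flags, scores):
    label = "ANOMALY" if flag == -1 else "normal"
    print(f"{event} -> {label} (score={score:.3f})")

In practice, statistical models like this complement rather than replace rule-based detections and human review, and their outputs feed triage rather than automatic blocking.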

Public-Private Partnerships and Regional Interventions

As cybercriminals evolve, so must collaboration among tech companies, cybersecurity vendors, universities, and government agencies. Law enforcement agencies like the FBI are adapting to the AI-driven landscape, but stronger public-private partnerships are crucial. These partnerships can facilitate intelligence sharing, identification of emerging threats, and development of best practices. Furthermore, exploring issues like the role of crypto ATMs in enabling financial crimes and designing effective regional interventions can equip local and regional actors with the tools necessary to combat cybercrime more effectively.

Looking ahead, there’s good news: we’re not yet seeing truly sophisticated abuse of AI in cybercrime. Identifying zero-day vulnerabilities remains a long and complex process, though AI tools are beginning to augment these efforts. Each year seems to bring a record-breaking number of CVEs, and while we’re on pace for another record, the future remains uncertain. Fortunately, the swarm intelligence scenario described earlier – autonomous AI agents collaborating within botnets to discover CVEs – has not yet materialized.

Read the AI-Enabled Cybercrime Industry Partner Prospectus to learn about sponsorship opportunities for table-top exercises in 2025.