Artificial intelligence is transforming the technology sector, and this shift is also impacting the cybercrime landscape. Cybercriminals are increasingly utilizing generative AI to enhance their strategies, techniques, and procedures, enabling them to execute faster, more sophisticated, and stealthy attacks.
The misuse of generative AI, much like its legitimate applications, has not primarily led to new types of cybercrimes. Instead, it has made existing crimes more efficient, lowering barriers to entry and allowing criminals to offload repetitive tasks, enabling them to focus on more complex aspects of their operations.
“AI does not inherently create new forms of cybercrime. Rather, it accelerates and scales familiar crimes while introducing new threat vectors,” explains Dr. Peter Garraghan, CEO/CTO of AI security testing firm Mindgard and a professor at Lancaster University in the UK. “If legitimate users can leverage AI to streamline their tasks, identify complex patterns, and reduce costs, why wouldn't criminals adopt the same approach?”
The emergence of agentic AI is beginning to alter this dynamic, with AI tools not just assisting attackers but actively automating their operations.
“The most notable change over the past year is AI's transformation from a basic 'helper' to becoming an autonomous partner for attackers, capable of executing entire attack chains,” states Crystal Morin, a senior cybersecurity strategist at Sysdig, a cloud-native security and visibility provider.
Below are various ways in which cybercriminals are currently leveraging generative AI to exploit enterprise systems.
Taking phishing to the next level
Generative AI facilitates the creation of highly convincing phishing emails, significantly increasing the chances that targets will divulge sensitive information or download malware.
Gone are the days of generic, poorly crafted, and error-laden emails. Cybercriminals can now utilize AI to quickly generate sophisticated, personalized emails that appear more legitimate to specific individuals.
These AI tools also enhance phishing campaigns by aggregating diverse data sources, including information gathered from social media.
“AI can quickly analyze which types of emails are being opened or ignored, allowing it to adjust its strategy to improve the success rate of phishing attempts,” Garraghan notes.
Facilitating malware development
AI is also being used to create more sophisticated or less labor-intensive malware.
For instance, cybercriminals are employing generative AI to craft malicious HTML documents. The XWorm attack, which begins with HTML smuggling, features malicious code that downloads and executes malware and shows signs of AI-assisted development.
According to HP Wolf Security’s 2025 Threat Insights Report, the loader’s detailed line-by-line description strongly suggests it was produced using generative AI.
Additionally, the design of the HTML webpage delivering XWorm closely resembles output from ChatGPT 4o, showcasing how AI can be used for malicious intents.
Furthermore, the ransomware group FunkSec, linked to Algeria and known for its double-extortion tactics, has started utilizing AI technologies, as reported by Check Point Research.
“FunkSec operators appear to leverage AI-assisted malware development, enabling even less experienced actors to quickly create and refine advanced tools,” Check Point researchers mentioned in a blog post.
Accelerating vulnerability hunting and exploits
Generative AI simplifies the process of analyzing systems for vulnerabilities and developing exploits.
“Instead of a black hat hacker manually probing a system's perimeter, an AI agent can perform this task automatically,” Garraghan explains.
Generative AI is believed to have contributed to a 62% reduction in the time it takes for attackers to exploit a vulnerability, shrinking the window from 47 days to just 18 days, according to a study conducted last year by ReliaQuest.
“This significant decrease strongly suggests that a technological advancement likely generative AI is allowing threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest reported.
Cybercriminals are using generative AI in conjunction with penetration testing tools to write scripts for tasks like network scanning, privilege escalation, and payload customization. AI is also likely assisting them in analyzing scan results and identifying optimal exploits, enabling quicker identification of weaknesses in targeted systems.
“These advancements speed up many phases in the kill chain, particularly the initial access,” ReliaQuest concluded.
Cyber resilience firm Cybermindr reported a different finding, revealing that the average time to exploit a vulnerability had dropped to five days in 2025. “AI-driven reconnaissance, automated attack scripts, and underground exploit marketplaces have accelerated the weaponization of vulnerabilities,” they noted.
For a deeper exploration of how generative AI tools are reshaping the cyber threat landscape by democratizing vulnerability hunting, CSO’s Lucian Constantin provides insights.
Launching AI-orchestrated espionage
In September 2025, Anthropic made headlines by announcing the disruption of a sophisticated AI-driven cyber espionage campaign.
The attackers had exploited Claude Code to automate about 80% of their campaign activities, targeting roughly 30 significant tech firms, financial institutions, and government agencies.
In a limited number of cases, the attacks were successful. Anthropic noted that an unnamed “Chinese state-sponsored group” was likely behind the campaign, which utilized jailbreaking tools to enable prohibited functionalities.
Last year, researchers at Carnegie Mellon’s CyLab Security & Privacy Institute, in collaboration with Anthropic, demonstrated that large language models like GPT-4o could autonomously plan and execute complex cyberattacks on enterprise-level networks without any human involvement.
“The study revealed that an LLM, when equipped with high-level planning abilities and supported by specialized agent frameworks, can simulate network intrusions and closely replicate real-world breaches,” explained a CyLab spokesperson.
Escalating threats with alternative platforms
Cybercriminals are now developing their own large language models (LLMs) like WormGPT, FraudGPT, DarkBERT, and others, which lack the guardrails that restrict the misuse of mainstream generative AI platforms.
These alternative platforms are often used for applications such as phishing and malware creation.
Moreover, mainstream LLMs can be customized for targeted purposes. Security researcher Chris Kubecka revealed in late 2024 how her custom version of ChatGPT, named Zero Day GPT, enabled her to identify over 20 zero-day vulnerabilities within a few months.
Stealing resources via LLMjacking
Threat actors are also stealing cloud credentials to hijack expensive LLM resources, either for personal gain or for resale, through a technique known as LLMjacking.
“Beyond theft of services, attackers are probing newer LLM models to find those without the safeguards of more established platforms, effectively utilizing them as unrestricted sandboxes to generate malicious code or bypass regional sanctions,” notes Sysdig’s Morin.
Creating a Silk Road style marketplace for AI agents
In addition to AI agents executing individual attacks, security experts are beginning to observe instances where coordination itself is being automated.
“We’re witnessing early experiments where multiple specialized agents interact, some focused on reconnaissance, others on tooling, execution, or data movement, without a single agent needing the complete picture,” explains Lucie Cardiet, a cyberthreat research manager at Vectra AI.
A tangible example of this trend is Molt Road, which serves as a dark-web-style marketplace for AI agents, although it currently has few listings.
“Autonomous agents can create listings, sell access or capabilities, coordinate tasks, and complete transactions with minimal human oversight, effectively automating the economics of cybercrime,” Cardiet states.
“In the coming months, we can expect attackers to actively exploit this model, breaking the attack chain into specialized, cooperating agents to enhance the speed and scale of their attacks,” she adds.
Breaking in with authentication bypass
Generative AI tools can also be misused to circumvent security defenses like CAPTCHAs or biometric authentication.
“AI can defeat CAPTCHA systems and analyze voice biometrics to compromise authentication,” according to cybersecurity vendor Dispersive. “This capability highlights the need for organizations to adopt more advanced, layered security measures.”
Leveraging deepfakes for social engineering
AI-generated deepfakes are increasingly being used to exploit communication channels that employees tend to trust more, such as voice and video, instead of relying solely on less convincing email attacks.
The issue is worsening due to the broader availability of AI technologies that can generate more realistic deepfakes, as noted by Alex Lisle, CTO of the deepfake detection platform Reality Defender.
“A recent incident involved a cybersecurity firm that relied on visual verification for credential resets,” Lisle explains. “Their process required a manager to join a Zoom call with IT to confirm an employee’s identity before a password reset.”
“Attackers are now using deepfakes to impersonate those managers during live video calls to authorize these resets,” he adds.
In one of the most notable cases, a finance employee at the design and engineering company Arup was deceived into approving a fraudulent HK$200 million ($25.6 million) transaction after participating in a videoconference where fraudsters used deepfake technology to impersonate the company’s UK-based CFO.
Impersonating brands in malicious ad campaigns
Cybercriminals have started employing generative AI tools to run brand impersonation campaigns via ads and content platforms, moving away from traditional phishing or malware tactics.
“Attackers now use generative AI to mass-produce realistic ad copy, creative content, and fake support pages, distributing them across search ads, social ads, and AI-generated content, targeting high-intent queries like ‘brand login’ or ‘brand support,’” explains Shlomi Beer, co-founder and CEO of ImpersonAlly, a security startup focused on protecting the online advertising ecosystem.
This tactic has been observed in ongoing Google Ad account fraud, in attempts to impersonate the Cursor AI coding assistant firm, and in a fake Shopify e-commerce platform customer support scam, among others.
Abusing OpenClaw
Attackers have also begun targeting viral personal AI agents like OpenClaw.
OpenClaw provides an open-source AI agent framework. A combination of supply chain attacks on its skill marketplace and misconfigurations has created potential exploits and avenues for malware distribution, as previously reported.
“Cybercriminals can exploit these virtual assistants to steal private keys from cryptocurrency wallets and execute code on victims’ devices,” states Edward Wu, CEO and founder of Dropzone AI. “We anticipate that 2026 will see security teams striving to prevent unauthorized use of personal AI agents.”
Poisoning model memories
To provide both short-term and long-term context, AI agents are increasingly relying on persistent memory, which opens the door for exploits involving the injection of malicious memories.
If an attacker manages to plant harmful or false information into an agent’s memory, that corrupted context can influence all future decisions made by the agent.
For instance, security researcher Johann Rehberger demonstrated how he could insert false memories into ChatGPT in September 2025.
“He used a malicious image with hidden instructions to embed fabricated data into the model’s long-term memory,” explains Siri Varma Vegiraju, security tech lead at Microsoft. “The alarming aspect is that once the memory is poisoned, it persists across sessions and continuously exfiltrates user data to a server controlled by the attacker.”
Hacking AI infrastructure
In the past year, attackers have shifted their focus from using generative AI to targeting the infrastructure that supports it.
This attack vector is exemplified by supply chain poisoning in Model Context Protocol servers, where compromised dependencies or altered code have introduced vulnerabilities into enterprise environments.
For example, a counterfeit “Postmark MCP Server” discovered in early 2025 silently BCC’d all processed emails, including internal documents, invoices, and credentials, to an attacker-controlled domain.
Many other malicious MCP servers have also been identified, designed to exfiltrate information without detection, according to Casey Bleeker, CEO of SurePath AI.
“We’re monitoring various categories of MCP-specific risks, including tool poisoning attacks, where adversaries inject malicious instructions into AI tool descriptions that execute when the agent invokes them; supply chain compromises, where a trusted MCP server or dependency is updated post-approval to act maliciously; and cross-tool data exfiltration, where compromised components in an agentic workflow silently siphon sensitive data through what appears to