Broad access to large language models (LLMs) and other AI tools is enabling cybercriminals to execute effective cyber-attacks quickly and at scale. This alarming trend was highlighted in a recent threat intelligence report from Cloudflare.
The 2026 Cloudflare Threat Report, which draws on research and analysis from the company’s Cloudforce One threat research team, illustrates how AI has become a powerful asset for cybercriminals. By reducing the effort required to conduct cyber campaigns, AI is making these operations not only easier but also more impactful.
Cloudflare stated, “An actor who previously lacked the skills to craft a convincing phishing email or write custom malware can now leverage an LLM to generate them rapidly and at scale, significantly lowering the barrier to entry for highly effective operations.”
The report reveals that a diverse array of threat actors, including state-sponsored hacking groups, financially motivated cybercriminal gangs, and hacktivist collectives, have adopted LLMs and AI in their operations.
Malicious hackers are employing these tools in various ways, such as using LLMs to craft more convincing phishing emails, particularly when they need to communicate in a language other than their native one. Attackers are also using AI to help write malware and run campaigns, effectively lowering the technical barrier to launching attacks. For instance, attackers are reported to use LLMs for real-time network mapping.
“Cloudforce One tracked a threat actor who leveraged AI to help identify the location of high-value data. This allowed the actor to compromise hundreds of corporate tenants, resulting in one of the most impactful supply chain attacks seen,” the researchers noted.
AI Deepfakes: The New Insider Threat
Corporate identities have become a prime target for cyber-attacks, with user accounts highly sought after by attackers seeking access to cloud infrastructure for covert operations.
In some cases, compromising existing accounts is not enough. Researchers caution that threat actors are creating AI-generated deepfakes and fraudulent identities to bypass hiring filters and infiltrate target organizations as employees. This tactic has been notably exploited by North Korean hackers.
“This infiltration turns the remote workforce into an attack vector, placing malicious insiders within the organization’s most trusted administrative and financial systems,” the report explained.
Cloudflare has warned that the rise of AI-based tools, which lower the barrier to entry for sophisticated technical campaigns, signals the “total industrialization of cyber threats.” Organizations must be prepared for the rapid evolution of cyber-attacks.
“Threat actors are constantly changing tactics, finding new vulnerabilities to exploit and ways to overwhelm their victims. To avoid being caught off guard, organizations must shift from a reactive posture to one fueled by real-time actionable intelligence,” emphasized Blake Darché, head of threat intelligence at Cloudforce One, Cloudflare.