AI Not Yet Game Changer For Cybercriminals, Says Intel471

Intel471’s whitepaper investigates how attackers are using AI to carry out attacks.

Published on Oct 16, 2025
Written by Mirren McDade

AI is not yet replacing traditional attack methods, but it is making them more efficient.

This is according to Intel471’s white paper, Precision Deception: Rise of AI-Powered Social Engineering, which explores how cybercriminals are increasingly leveraging artificial intelligence to enhance traditional social engineering tactics.

Most threat actors continue to rely on Phishing-as-a-Service (PhaaS) platforms and off-the-shelf kits, with AI tools primarily being used to draft and localize content more quickly. According to the report, AI has not yet been the game changer some predicted in this particular area.

Generative AI has certainly improved the quality of phishing and Business Email Compromise (BEC) lures, making them more convincing and less error-prone. However, the report finds that it is being used to boost efficiency rather than as a fundamentally new strategy.

Three Channels of Deception: Text, Voice, & Visual

Intel 471 maps AI-enhanced techniques across three communication channels, each one leveraging distinct trust cues:

  • Text (Phishing / BEC): These center around language and context, with AI helping attackers craft more persuasive email copy, mimic writing styles, and tailor messages to regional audiences.
  • Voice (Vishing): These capitalize on real-time interaction and a tone of urgency, with deepfake voice synthesis and voice cloning tools used to enable attackers to impersonate executives or trusted contacts in phone-based social engineering.
  • Visual / Multimedia: These involve synthetic imagery, with AI-generated face swaps, deepfakes, and manipulated videos used to bolster fraud campaigns and make deceptive messages more compelling.

The white paper explains how each channel leverages unique human trust cues, highlighting how AI can enhance the effectiveness of social engineering campaigns across multiple modalities. 

As AI grows in prominence, cybercriminals will make use of its capabilities to improve productivity and reduce cost. Intel 471 predicts that the use of AI as a component of an attack is likely to increase. This would build on an already significant baseline: KnowBe4’s 2025 Phishing Threat Trends Report found that 82.6% of phishing emails identified between September 2024 and February 2025 showed signs of AI use.

Key Takeaways for Defenders

At Black Hat 2025, Expert Insights spoke with Michael DeBolt, Chief Intelligence Officer at Intel471, who told us: “Everybody’s losing their mind [over AI] rightly so, but we just haven’t seen a lot of uptake on threat actors really leveraging AI to any large degree. I would say probably the biggest thing that we’ve seen or a trend I should say is social engineering.”

“They’re using AI to help them curate really believable lures within those emails. So that’s happening. It’s creating more efficiency gains for actors, for sure. Primarily on the social engineering side.”

According to Intel 471, organizations should remain on the lookout for AI-enhanced attacks, particularly in phishing and multimedia deception. Widespread adoption of AI in day-to-day cybercrime will depend on, and will likely be preceded by, a notable decrease in model hosting costs and the rise of turnkey AI kits similar to today’s popular PhaaS tools.

While AI is not yet transforming social engineering at scale, it serves as a force multiplier that can improve efficiency and message quality. Early awareness and threat intelligence can help defenders anticipate and mitigate AI-assisted campaigns.