Phishing Is Getting Even Harder To Stop – And AI Agents Are The Next Target

AI-driven phishing is the top CISO concern for 2026. Attacks are more targeted, harder to detect, and increasingly aimed at the AI agents reading your emails.

Published on Mar 31, 2026
Written by Joel Witts

AI-driven phishing and social engineering is the number-one threat concerning CISOs in 2026, according to this year’s CISO Confidence and Investment Trends report from Expert Insights.

The rise of generative AI has meant an explosion in phishing campaigns, which are easier than ever for hacking gangs to spam out at scale.

Not only is phishing more widespread than ever, but it’s also more sophisticated and harder for email security controls to spot.

The average phishing campaign used to send about 10 identical emails. Today, that number has dropped to 1.8, Erich Kron, CISO Advisor at KnowBe4, told Expert Insights at RSAC 2026.

This is largely because Phishing-as-a-Service (PhaaS) kits, which are pre-packaged DIY templates for sending phishing emails at scale, have gotten a lot more sophisticated. PhaaS kits now often include GenAI capabilities for writing unique emails at scale, polymorphic payloads, and localized translations as standard features.

The barrier to entry for new phishing campaigns has collapsed, and the quality of attacks has gone through the roof.

“It means the attackers are more efficient,” Kron told Expert Insights. “And the numbers of attacks we’re seeing are going way up.”

The more polished an email looks, the less likely it is to be human-written. The old red flags, like typos and outlandish claims, are disappearing.

Molly McLain Sterling, Senior Director of Proofpoint’s Global Cybersecurity Strategist Team, relayed one CISO’s observation: “The way that I tell whether something [from my boss] is phishing now is whether it’s funny. My boss is not funny.”

Phishing Is Moving Out Of The Inbox

Email security has improved. Employees have been trained to be suspicious of unexpected messages in their inbox. So, attackers are going outside the corporate inbox and targeting more personal channels, like your personal email, WhatsApp, or social media accounts.

At the same time, generative AI means phishing attacks can become even more realistic, with highly personalized audio and video deepfakes of your closest colleagues or relatives. With just a few seconds of audio, you can convincingly clone anyone’s voice.

Autonomous agents can automate this process, scraping the web for data to find the specific triggers that will make a specific person click.

McLain Sterling argues attackers are still in experimentation mode when it comes to fully autonomous, personalized AI-driven attacks. But AI is already making an impact. “It’s not a one-and-done phishing email anymore. It’s multi-channel, multi-stage attacks. It’s in your LinkedIn, it’s in your Teams, it’s in your Slack,” McLain Sterling said.

The encryption on these platforms compounds the problem. Services like WhatsApp are popular because of their privacy features, but those same security controls that protect users can help phishing actors to hide their campaigns from security teams.

“The threat actors know that most of these platforms are end-to-end encrypted. So they’ll start off in an email and then pivot over to one of these other places,” Kron explained. “And then [IT security teams] can’t see what’s going on.”

A new category of security companies known as ‘Digital Executive Protection’ has emerged to address these risks, focusing specifically on securing the personal accounts of top executives.

“I recognized several years ago that there were significant gaps in corporate cybersecurity protection that left business leaders and their families vulnerable to attacks in their personal lives, which, in turn, often led back to the enterprise,” Dr. Chris Pierson, founder of BlackCloak, one such Digital Executive Protection company, told Expert Insights.

AI Agents Are The Next Phishing Target

As people begin delegating tasks to AI agents, from inbox management to booking travel to processing invoices, those agents become new targets for social engineering. And they introduce risks that look a lot like traditional phishing, but at machine speed and scale.

“Humans and agents can fall for the same things,” McLain Sterling said. “They can make mistakes in the same way. They can both execute code that they shouldn’t. They can both leak data. They can both be malicious. They can both be manipulated.”

She described a scenario that bridges the gap between traditional phishing and agent exploitation. A phishing email arrives that is too long to read, so the recipient runs it through their Copilot to get a summary. But hidden in the email, in white text that is invisible to the human eye, are instructions that the AI reads and executes.

“With prompt injection, you can get something in a phishing email, it’s too long, you run it through your Copilot to summarize, and in white text that’s not visible it says go execute these things,” McLain Sterling said. “Those are things we haven’t necessarily seen in the wild, but we know they could very easily happen. And we’re protecting against those.”
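One defense against this kind of hidden-text injection is to flag invisible content before the email ever reaches a summarizing agent. The sketch below is illustrative only, not a production filter: the style heuristics (white text, zero font size, `display:none`) are assumptions about how such payloads are commonly hidden, and a real mail pipeline would need to handle CSS classes, external stylesheets, and off-screen positioning as well.

```python
from html.parser import HTMLParser

# Heuristic inline-style markers for text hidden from human readers but
# still present in the raw HTML an LLM ingests. Illustrative, not exhaustive.
HIDDEN_MARKERS = ("display:none", "visibility:hidden",
                  "font-size:0", "color:#ffffff", "color:#fff", "color:white")

class HiddenTextFinder(HTMLParser):
    """Collects text that sits inside an element styled to be invisible."""

    def __init__(self):
        super().__init__()
        self.stack = []         # one visibility flag per open element
        self.hidden_text = []   # suspicious fragments for review

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        hidden = any(marker in style for marker in HIDDEN_MARKERS)
        inherited = bool(self.stack and self.stack[-1])  # hidden parents hide children
        self.stack.append(hidden or inherited)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if self.stack and self.stack[-1] and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_instructions(email_html: str) -> list:
    """Return text fragments a human reader would never see."""
    finder = HiddenTextFinder()
    finder.feed(email_html)
    return finder.hidden_text
```

A filter like this would run before the agent does, quarantining or stripping the hidden fragments so the summarizer only ever sees what the human recipient sees.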

The traditional answer to phishing has been training. Teach people to spot suspicious emails, run simulations, build a security culture. But several leaders at RSAC questioned whether that model can keep pace.

McLain Sterling, who came to Proofpoint from the security awareness industry, was candid. “I really do think you have to put more of the burden on the technology than on the person. The security awareness industry is hanging on a little tightly to the idea that we’re going to train that person,” she said.

“What protecting agents requires is actual protections. I’m going to stop it. I’m going to check it. I’m going to do continuous reassessments.”
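That control-first posture can be made concrete with a policy gate that sits between an agent and its tools. The minimal sketch below (all tool names, the `ToolCall` shape, and the internal domain are hypothetical, not any vendor's API) rejects any action outside an explicit allowlist and re-checks every call rather than trusting the agent after a single approval.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """A proposed action from the agent: which tool, with which arguments."""
    tool: str
    args: dict = field(default_factory=dict)

# Hypothetical policy: the agent may only use these tools, and outbound
# mail is restricted to the organization's own domain (placeholder below).
ALLOWED_TOOLS = {"summarize_email", "fetch_calendar", "send_email"}
INTERNAL_DOMAIN = "@example.com"

def gate(call: ToolCall) -> tuple:
    """Check every proposed call; nothing is grandfathered from earlier calls."""
    if call.tool not in ALLOWED_TOOLS:
        return False, f"tool '{call.tool}' is not on the allowlist"
    if call.tool == "send_email":
        recipient = call.args.get("to", "")
        if not recipient.endswith(INTERNAL_DOMAIN):
            return False, f"external recipient '{recipient}' blocked"
    return True, "ok"
```

Even if a prompt-injected instruction convinces the agent to mail a thread to an outside address, the gate, not the model, makes the final decision, which is the "stop it, check it" layer McLain Sterling describes.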

Phishing has always exploited the gap between trust and verification. AI is making that gap wider, faster, and harder to see. And the next wave of attacks will not just target the humans reading the emails. They will target the agents reading them on their behalf.