Phishing-as-a-Service has transformed the social engineering ecosystem, giving low-skilled attackers access to sophisticated toolkits that include AI-powered email generation, polymorphic payloads, and localized translations. This has supercharged both the volume and realism of phishing attacks targeting businesses.
Phishing is also moving beyond the inbox and into platforms like Teams, Slack, WhatsApp, and even physical mail, targeting people where they least expect to be hit.
And as organizations begin delegating tasks to AI agents, from managing inboxes to booking travel to processing invoices, those agents are becoming new targets for social engineering. Agents can be manipulated, and they operate at a speed that makes human oversight difficult. KnowBe4’s Erich Kron argues that securing these agents requires a new approach: AI watching AI, with humans called in only when something looks wrong.
Expert Insights spoke to Erich Kron, CISO Advisor at KnowBe4, at RSAC 2026 to discuss how the phishing-as-a-service ecosystem is evolving, why phishing is spilling out of the inbox, and why the rise of AI agents means organizations will need to train machines, not just humans, to recognize social engineering.
Q. What are the big themes for KnowBe4 at RSA this year?
The biggest thing is, historically we’ve been known for security awareness training. But we’re really embracing the human risk management side of things. And honestly, we’ve been doing it for a long time. We just haven’t really broken it out.
We have an AI agent called AIDA (AI Defense Agents) and we’re doing some really cool stuff. We’ve turned AIDA into a collection of agents. We currently have at least seven agents running. And the idea is we’re trying to tackle all of those things that agents can do.
Because human risk is obviously more than just phishing. It’s accidental stuff. It’s people making mistakes. Misaddressing emails is another thing. I love the fact that we have some stuff in place that kind of pops up and says, are you really sure this is who you meant to send it to? We’ve all sent emails to the wrong people.
Q. With KnowBe4, the core has always been security awareness training around phishing and best practices. Is AI now becoming what people need training on?
Yeah, absolutely. And those are some of the courses that we have. There’s a lot of things about using AI safely. Part of it is enabling agents. People are just throwing agents out there. They’re giving them full access to their emails and everything. That could be catastrophic. So, learning what the dangers are with that is a big deal. As well as what you put into those LLMs. It may feel really cool to be able to say, here’s our quarterly earnings, make me a fancy report on it. But throwing that up there, if you’re not using a secure LLM, that ends up in the training data.
People don’t understand it. It’s such a new technology that’s moved so quickly. People just don’t understand the risks with it. Especially on the SMB side. Those smaller businesses, they’re the ones driving the adoption most quickly. They need those cost savings, or they can see the opportunity, and they want to take advantage.
I also see a lot of organizations really struggling to write AI-related policies. They’re like, we’re concerned about it, but in the same breath, we know that if we say no, people are going to do it anyway. So, they’re kind of running that razor’s edge of saying, okay, this is okay, but you need to do it this way. Which can be a challenge if it’s too complex for the people that are trying to roll it out. So, teaching people about those risks and why it matters and why they should pay attention to their policies is absolutely critical.
Q. Your research shows Phishing-as-a-Service toolkits are driving 90% of enterprise cyber-attacks. What does the Phishing-as-a-Service ecosystem look like today?
Phishing-as-a-Service is not new. It’s part of the whole as-a-Service thing. There’s Ransomware-as-a-Service, Malware-as-a-Service, all of that. But what it really allows is low-skilled people to be able to get into the cybercrime game. And it’s competitive, just like any other business. Cybercrime is a very competitive space.
So, they’re offering things such as GenAI tools in the Phishing-as-a-Service kits. Not just generation of the emails either, but other things as well. Translations and localizations. These are things that have always been available on the dark web through the marketplaces, but you had to pay for that separately. What it really means is that attackers are more efficient. And the numbers of attacks we’re seeing are going way up.
We’ve especially seen a big jump in polymorphic phishing. [That means that] every phishing email [is] a little bit different to get around the filters and get around reporting. The last time I talked to one of our guys, they were saying the average campaign had dropped from about 10 emails being the same to 1.8. Almost every single one is different.
So, you could have five phishing emails to the same organization and they’re all slightly different. And it’s not just the text or the subject. It can be the payload that’s attached, it gets tweaked a little bit. They’re doing whatever they can to get around the filters and around the endpoint protection. And this kind of stuff is actually being rolled into the Phishing-as-a-Service offerings. Because they’ve got to be competitive with other people too.
Q. We’ve been seeing headlines that phishing is moving out of the inbox. What are you seeing there?
We just released the Teams phish alert button, which is really cool. Because what’s happening is corporate entities have all of these different side channels for communications. I’ll be honest, we have Slack, there’s email, and sometimes we text each other. There are all of these different things going on. And that’s very normal within an organization. The problem is, if somebody starts questioning what’s going on, they don’t really have a way to report it like you do with email.
The threat actors also know that most of these platforms are end-to-end encrypted. So, they’ll start off in an email a lot of times and then they’ll pivot over to one of these other places. And then they’ll say, hey, let’s go here, it’s more secure. It’s all of these things that people think sound like a good idea. And then we can’t see what’s going on, what’s being asked of them, the actions and the things that we would normally trigger on in an email conversation. So, they’re doing it because it’s effective. They don’t do things because it doesn’t work. They do it and continue to do it because it’s really effective.
Teams is such an interesting one because it’s inherently trusted. You get a Teams message from someone; you’re not thinking the same way that you do when you get a suspicious email. And the other issue is, if somebody gets in and takes over somebody’s email account or their Microsoft 365 account, they have access to their Teams also. What a great way to just start popping those messages out to people if you have a compromised account. There’s a level of trust there. People have not learned to distrust these platforms yet.
Q. How is AI being used in phishing today, and what’s the most concerning development?
Some of the big things they can do are obviously putting together emails, but also creating incredible customization in translations, and even more than just translation, localization. I always use the example that my dad was born in Bavaria. The dialect is very different from standard German. So, if I’m going to go after somebody in Bavaria, and I want them to believe that I’m one of their people, I want it to be in a Bavarian dialect. And I can do that very well with AI as opposed to reaching out to them in high German or a dialect that doesn’t match. It can really lower people’s guard.
But also, the voice cloning is a big deal. We did a talk in New York, and we cloned a CDO’s voice. It took me 16 seconds to clone his voice perfectly. 16 seconds of audio off a YouTube interview. And having those tools at their disposal is crazy. Imagine too, if you get an email that says, I need you to wire out $150,000, and you’re kind of like, alright. But then you get a phone call that says, I’m in the airport, got to get on a plane, I need you to wire that money, I just sent you the email, let me know when you’re done. How effective is that going to be versus just a send money email?
Q. Could AI-powered spear phishing at scale become a real problem, where agents autonomously research targets and generate personalized attacks?
It’s already happening. If you want to research somebody and get OSINT on them, it’s really easy to have AI go out and find things about that person. And if you combine that with some of the databases from the dark web from other breaches that are sold, you can build a sweet dossier on a person fairly easily. AI is really good at figuring out the otherwise hard-to-find stuff.
One of my buddies posted on LinkedIn, it was one of those “hey, where in the world am I” posts with four pictures. You couldn’t really see a whole lot. So, I threw it into a couple of LLMs and said, where is this? It came back and said, okay, we see these flags in the background, it’s this area, it looks like this venue. And sure enough, it nailed exactly where they were from three photographs. Being able to build that information about somebody, let’s say it’s an executive out on the road, they post up some pictures, you put that together and fire off a message going, hey, as you know, I’m in Norway. It makes it incredibly believable.
Q. We’re also hearing about how callback phishing and IT helpdesk scams are evolving. What are you seeing there?
My father-in-law, he had a stroke a couple of years ago and has some cognitive issues, not bad, but enough. And his wife happened to be in surgery at that time. So, he’s under a lot of stress. He got a message saying he missed jury duty and was going to jail if he didn’t pay a fine. He ended up calling the number, talking to the person, the person walked him through downloading stuff to his laptop.
What saved him was he couldn’t remember his password to his bank. The guy was getting mad at him because he couldn’t do that. And then he got a call saying his wife was out of surgery. People, especially some of the older generation, tend to be a lot more trusting about stuff like that.
And sometimes on the back end, when you call back, you’re not necessarily immediately talking to a person. You may be talking to an AI bot that will set the hook, and then once they’ve got you hooked, a person takes over. So, it’s very efficient for them. They don’t have to have people sitting around. It starts off with an AI conversation and follows up with handing off to a human scammer.
And obviously, Scattered Spider has been very successful with what they’ve been doing targeting IT helpdesks. The problem is you have people that excel in customer service because they care and they want to help. So, when somebody calls and they’re like, my phone got crushed, I have a new one, I can’t get set back up, I have a meeting in two hours. The people genuinely want to help them. So, they buy that story and go through the process of getting things set up. And the actors say, oh gosh, my MFA isn’t working. And they say, we’ll go ahead and fix that for you. It’s crazy how much they can get done with a simple call like that.
Q. People are starting to delegate parts of their jobs to AI agents. How susceptible are those agents to phishing, and does there need to be a KnowBe4 for agents?
Actually, that is a goal we have. Focusing on protecting agents. Because the way we’re looking at it is that an agent is going to be an extension of the workforce. Everybody’s going to be using agents to do a lot of different things. It’s competitive. It’s much like word processing versus typewriters. We’re in that era where it’s just going to become an extension of everybody.
The problem is securing those, because you’re giving them access to a lot of things. We as humans can’t keep up with what the AI is doing. So, we’re going to need AIs to essentially watch those AIs, or intercept what they’re trying to do and have a security focus to say, yes, this is a reasonable thing, this makes sense for it to do this. And hopefully sometimes even call a human in the loop and say, wait a minute, it’s asking to do this, is this okay?
I see that being a lot of the future around agents, because I really think in like two years, everybody’s going to have agents attached to their inboxes. Someone’s going to be having them book travel. And every organization is throwing these agents out as quickly as they can to try to be ahead. But how much security research is going behind that? How much are they actually training them on what to watch out for? Is it just rapidly trying to get to market?
One of the things I love about us is we’ve been doing this for a long time. We have a lot of background, and we have a lot of data as to what happens when something is asking to do something suspicious. Especially on the social engineering side, whether it’s phishing, smishing, Teams, whatever. As we have more and more agents out there, we’re going to be able to spot that very well. And look at the context of what’s going on.
Q. Are you optimistic about AI and security, or concerned?
Cautiously optimistic. I tend to be more optimistic about it. But I’m also at the point where, if we don’t do something to address the security concerns around it, we’re going to cause ourselves more trouble than if we go into it a little bit slower. There’s a good book out there called The Coming Wave [by Mustafa Suleyman and Michael Bhaskar]. It’s talking about what’s happening with AI and it lays out some of the arms race between us and other nation-states that are competing. Do we pull out all the stops and just go with AI, or do we insert regulations that may end up slowing down our AI growth and putting us behind someone else? That’s the big debate.
And it’s coming up in Europe, whether or not Europe’s falling behind too much because of regulations. I actually love European regulations because they think so much about privacy. I see the good in that. But it could be detrimental when we’re in this race. Let’s not kid ourselves, this is a big deal. It’s going to fundamentally change the way the world works, the way business works. Just like when we had the internet come online. I don’t think we’re doing enough to really weigh those risks.