
A New Era Of Phishing Is Here: A Deep Dive Into GenAI-Powered Deepfakes

On the latest episode of the Expert Insights Podcast, Eyal Benishti, CEO of IRONSCALES, explains how we should be rethinking our phishing strategies in a new era of GenAI.


Phishing is entering a new, more dangerous phase with the rise of AI-powered deepfake technology, IRONSCALES CEO Eyal Benishti explains on the latest episode of the Expert Insights Podcast.

Phishing today is most closely associated with the email channel. Phishing campaigns typically start with a malicious email containing a harmful link or attachment. More targeted campaigns, like business email compromise (BEC), involve the impersonation of high-level executives or suppliers. But the rise of GenAI is leading to new and different phishing capabilities that “go way beyond a convincing phishing email.”

“Phishing 3.0 is going to be something very, very different. It’s a pivotal moment. We’ve got to have a different approach to how we think about phishing and social engineering in the future. We have to zoom out and think about it, not just as how we can protect our inboxes, but how can we protect communication in general,” says Benishti.

Listen to our full conversation on the Expert Insights Podcast.

Understanding The Deepfake Threat

Cybercriminals are already using AI to craft more convincing emails, but the technology goes far beyond that. GenAI can take a short clip of someone speaking and create a near-perfect replica of that person: their face, their voice, their body language.

“This is allowing actors to launch multi-stage, multi-channel, multi-modality attacks. It starts with an SMS, then an email. Then someone is leaving you a voicemail and jumps on a Teams call with you, and it looks exactly like [your CEO],” Benishti explains. These scenarios are not hypothetical. Last year, UK engineering firm Arup fell victim to a £20m phishing scam after an employee was tricked by a deepfake video call.

What’s more, attackers no longer need to do the hard work of researching and targeting potential victims themselves. AI can run reconnaissance autonomously: it can find individuals, come up with ideas on how to attack them, create emails, and clone voices. “These are the kinds of scenarios we’re starting to understand,” Benishti says.

IRONSCALES is a cloud email security provider that offers adaptive protection against phishing attacks via a combination of AI and human defenses. IRONSCALES recently released a report assessing organizational preparedness in the face of deepfake threats, following interviews with 200 IT professionals.

The research found that IT professionals are taking deepfake attacks very seriously and are proactively implementing defenses: 94% have concerns about AI deepfakes, 68% have already implemented user training, and 73% plan further investment in deepfake protection.

But choosing the right defensive strategy remains a challenge: 60% of respondents said they were only somewhat confident, or not confident, in their ability to defend against such attacks.

Defending Against AI Deepfakes

Benishti argues there are three components to a successful deepfake defense strategy: being proactive, training users effectively, and having the right business processes in place to protect data.


“We need to fight fire with fire. We can’t use yesterday’s technology in order to protect against tomorrow’s threat. It’s like bringing a knife to a gunfight. If threat actors are using GenAI and taking steps to find potential targets and ideas on how to target organizations, we need to do the same.

“Companies will need to find proactive ways to identify how they can be phished and potentially fall victim to this type of attack. Using the same GenAI technologies that actors are using to continuously battle-test and assess their own defenses is going to be a big component of that.

“Second, awareness, training, culture. We have to invest time in educating and equipping our users, not just with knowledge but with tools. We need to augment end users and our security teams with the right tools to make the right decisions and make them quickly.

“And thirdly, ensure that processes are being rechecked and revisited. Companies should rethink processes such as how they approve wire transfers, which today often rely on the fact that if you call someone and they answer, or if someone calls you and it sounds like that person, the transaction is approved. Create a culture that understands the new risks.”

Getting Prepared For An Uncertain Future

The deepfake threat will only get worse as GenAI becomes cheaper and more widely accessible. DeepSeek, the AI tool developed in China, is almost as powerful as OpenAI’s ChatGPT despite being developed at a fraction of the cost. It’s also fully open-source. Threat researchers have already warned that the tool “has critical safety flaws” and in testing “failed to block a single harmful prompt.”

Benishti’s final advice is to start planning today and to be proactive: “Build a plan. Get educated about the new threat landscape because it’s evolving, it’s morphing, it’s happening very quickly. Understand that it’s not theoretical anymore.

“I think the key factor is we need to be much more proactive. In the cybersecurity ecosystem, we have gotten used to building fences, buying all these tools to defend us, then sitting and hoping that the walls we put in place will stop all the attacks coming in.

“It started with firewalls and then EDRs. It’s not the case anymore. In the future, if you just try and build a wall and sit and wait, the breach will come; something will happen. The only way to stop the new types of attacks is to have technologies that can be on the lookout for potential threats and take a proactive, continuous approach to ensure your defenses are where they need to be.”

Listen to the full episode on the Expert Insights Podcast

