AI-Generated Deepfakes and the Rising Risk of Cyber Attacks

Deepfakes are a growing threat. How can you spot them and prevent risks associated with them?

Last updated on Jun 24, 2025
Written by Mirren McDade | Technical Review by Laura Iannini

AI Deepfake TL;DR

What are AI Deepfakes?

Highly realistic AI-generated synthetic media (videos, images, audio) created using deep learning and GANs, mimicking voices, faces, or actions.

How are they being used?

  • Benign: Entertainment (parodies, movie effects, de-aging actors), education (historical figures).
  • Malicious: Disinformation, non-consensual adult content, blackmail, fraud, fake endorsements.

What are the risks?

  • Risks: Cybersecurity threats (social engineering, CEO fraud), data breaches, reputational damage, legal/ethical issues (consent, digital identity).

  • Detection Challenges: Most people overestimate their ability to spot deepfakes; in one study, only 0.1% of participants identified every sample correctly. Forensic techniques (e.g., phoneme-viseme mismatch analysis) and AI-powered detection are needed.

How can we stay protected?

  • Multi-factor authentication, biometric liveness checks, KYC processes
  • Security Awareness Training (SAT) with modern, behavior-based approaches.
  • AI-powered detection tools (e.g., Attestiv, Reality Defender, Intel’s FakeCatcher).

What does the future look like?

An arms race between deepfake creation and detection. Collaboration, regulation (e.g., TAKE IT DOWN Act), and watermarking (e.g., Google’s SynthID) are critical to combat evolving threats.

AI-generated deepfakes are highly realistic and convincing synthetic media, which typically come in the form of videos, images, or audio. The term “deepfake” alludes to their components, as it combines the deep learning concept with the idea of something fake. In short, they look realistic, but aren’t.

These pieces of hoax media are created using AI, most often through deep learning techniques such as Generative Adversarial Networks (GANs). These technologies can convincingly mimic a person’s voice, face, or mannerisms, making it appear as though they said or did something they never did.

Deepfake technology has both legitimate and illicit applications; however, it is increasingly being used as a cybersecurity threat, especially in social engineering attacks, CEO fraud, and disinformation campaigns. Its rising prominence is due to the growing accessibility of AI tools and the sophistication of generated content, which can bypass traditional security checks and deceive even experienced users.

For organizations, this elevates the risk of data breaches, financial fraud, and reputational damage, highlighting the urgent need for deepfake detection tools and robust verification processes. These should work in conjunction with other security measures such as multi-factor authentication and staff training focused on social engineering awareness.

But even with these safeguards in place, can cybersecurity today keep up with the accelerating sophistication of deepfake attacks?


Deepfake Use Cases 

Deepfakes can be used for a wide range of applications, ranging from benign to malicious.

The most innocent application of deepfake technology is for entertainment purposes. Examples that can be easily found online include parody videos of existing characters, reimagined scenes from TV or movies, or bringing historical figures to life for educational or comedic purposes. 

People often use deepfakes to swap their own or others’ faces onto film clips of celebrity interviews, with the goal of creating an obviously comedic piece of media that is not meant to appear convincing or real. When made glaringly obvious or properly labelled as fake, these uses are more ethical in nature and demonstrate the creative potential of this technology.

Deepfakes have been used in the production of movies and TV shows to digitally alter or swap out actors, which can expand creative possibilities. This might include seamlessly replacing stunt doubles with actors, de-aging performers to portray younger versions of themselves, or even digitally resurrecting deceased actors to continue their roles after death. And while this allows us to continue to enjoy beloved characters beyond the actor’s lifetime, which is exciting, there are concerns to consider with this capability. 

Firstly, the use of an actor’s likeness without proper consent can lead to intellectual property disputes and questions around digital identity rights. 

Secondly, there is the potential for misuse, such as creating portrayals with the actor’s likeness that they would not have consented to, whether because it shows them in a poor light or because it makes light of, or encourages, behaviors the actor would not wish to promote. 

So, while deepfakes offer exciting tools for storytelling, they also necessitate new safeguards around consent, transparency, and creative integrity. 

In a US courtroom this year, a deceased man’s family used an AI deepfake of him to deliver a victim impact statement at his own killer’s sentencing. Using AI to generate victim impact statements marks a new (and currently legal, at least in Arizona) tool for sharing information with the court outside the evidentiary phases. 

This case highlights the transformative potential and risks of deepfake technology in legal settings. While it may offer a meaningful way for a victim to “speak” posthumously, which serves as a reminder of the victim’s humanity, this also raises questions about consent, emotional influence, and the reliability of AI-generated evidence.

Unfortunately, a lot of the more practical applications for this technology have been malicious, with the motivations behind creating and spreading deepfakes often being political, financial, or even personal. Some examples of nefarious motivations include the following:

  • Spreading disinformation on social media, especially during important elections 
  • Producing adult content without the depicted person’s consent 
  • Blackmail and extortion, which may involve threatening to leak compromising AI-generated materials unless the victim pays or agrees to the attacker’s demands 
  • Creating fake endorsements for products or investment scams, which is also nearly always done without the depicted person’s permission 
  • Impersonating someone to commit fraud or other criminal activities

Evolution Of Deepfakes – How Did We Get Here? 

“Believe half of what you see and nothing of what you hear” – Edgar Allan Poe

This ode to the necessity of scepticism from writer Edgar Allan Poe is perhaps more relevant than it has ever been before. The art of creating false media has seen a considerable evolution in recent years, and the days of doctoring physical photos manually are far behind us.

Digitally editing photos or videos using software such as Photoshop used to be relatively common, but this approach can take hours to complete. We’ve seen rapid improvement in the technical quality of deepfakes in recent years, as well as easier means to access and create them. Newer machine learning models have been developed to recreate people based on training data, with the most common type of model used for creating deepfakes being a Generative Adversarial Network (GAN).

This machine learning framework was invented in 2014, and it revolves around two underlying neural networks (called a generator and a discriminator) that essentially compete against each other. 
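
To make the “two competing networks” idea concrete, here is a minimal, illustrative PyTorch sketch of a GAN training loop. The dimensions and the “real” data batch are toy placeholders rather than actual face imagery; a genuine deepfake model would train on large volumes of real footage.

```python
# Minimal sketch of the GAN idea behind deepfakes: a generator learns to
# produce convincing samples while a discriminator learns to tell real from
# fake. Dimensions and data here are toy placeholders, not a real face model.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks push against each other, the generator’s output gradually becomes harder for the discriminator to distinguish from the real training data, which is exactly what makes the resulting media so convincing.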

Around 2017, users on Reddit began posting deepfakes that swapped celebrities’ faces into other types of content. Some of these face-swapping posts were intended to just be humorous, but others had more sinister goals. 

Members of Reddit’s “r/deepfakes” subreddit shared deepfakes they created, many of which involved celebrities’ faces swapped onto the bodies of actors in pornographic videos. That activity sparked widespread attention and controversy, marking the inception of modern deepfake media and leading Reddit to ban the subreddit in early 2018 over concerns related to non-consensual pornographic content.

Deepfakes for voices began taking off around 2020. Today, anyone with a smartphone and internet connection can use any number of apps to create deepfake voice and face clones in minutes. As deepfake creation becomes quicker and easier, the urgency for advanced detection solutions, multi-factor authentication, and vigilant social verification to defend against these evolving AI-enabled threats is amplified.

Are AI Deepfakes Causing Data Breaches?

According to a 2025 study from iProov, people tend to vastly overestimate their ability to spot deepfakes when engaging with online content. 

  • Out of the 2,000 participants, just 0.1% of respondents were able to accurately identify if every single video / image shown was real or a deepfake. 
  • Whether they made correct guesses or not, over 60% of respondents reported feeling confident in their ability to tell the difference between real and AI-generated content. This confidence level trended higher in younger people (ages 18-34) than in older people. 
  • 30% of 55–64-year-olds and 39% of those aged 65+ had never even heard of deepfakes, highlighting a significant knowledge gap and increased susceptibility to this emerging threat by this age group. 
  • When encountering a suspected deepfake in the wild, only one in four participants reported seeking out other information sources to check for authenticity. 

Deepfakes are a convincing tool for impersonation, which makes them a popular choice for phishing and social engineering. 

  • In May of 2025, the FBI’s Internet Crime Complaint Center (IC3) released an advisory about AI-generated voice cloning being used to impersonate government officials, well-known public figures, or even personal relations. 
  • These types of scams often target vulnerable populations, such as the elderly, who may not know much about deepfakes.

How To Stay Secure Against AI Deepfakes

Researchers from Berkeley and Stanford have identified phoneme-viseme mismatches as one forensic technique for detecting whether videos are AI-generated. The idea is to compare spoken sounds (phonemes) with the corresponding mouth shapes used to make those sounds (visemes), and to check whether they line up as they should.
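
As a rough illustration of how such a check could work (not the researchers’ actual implementation), the sketch below flags frames where a closed-lip phoneme such as M, B, or P coincides with a visibly open mouth. The phoneme timings and per-frame mouth-openness values are hypothetical inputs that would, in practice, come from a forced aligner and a facial-landmark tracker.

```python
# Simplified illustration of the phoneme-viseme mismatch idea: phonemes such
# as M, B, and P require the lips to be closed, so a frame where one of these
# sounds is spoken while the mouth is clearly open is a red flag.

CLOSED_LIP_PHONEMES = {"M", "B", "P"}
OPEN_MOUTH_THRESHOLD = 0.30  # normalised lip gap above which the mouth counts as "open"

def find_mismatches(phoneme_segments, mouth_openness, fps=30.0):
    """phoneme_segments: list of (phoneme, start_sec, end_sec);
    mouth_openness: per-frame lip gap, normalised to face size."""
    mismatched_frames = []
    for phoneme, start, end in phoneme_segments:
        if phoneme not in CLOSED_LIP_PHONEMES:
            continue
        for frame in range(int(start * fps), int(end * fps) + 1):
            if frame < len(mouth_openness) and mouth_openness[frame] > OPEN_MOUTH_THRESHOLD:
                mismatched_frames.append((frame, phoneme))
    return mismatched_frames

# Toy example: "B" is spoken around frames 30-36, but the mouth stays open.
segments = [("AH", 0.0, 1.0), ("B", 1.0, 1.2), ("AA", 1.2, 1.6)]
openness = [0.1] * 30 + [0.5] * 7 + [0.2] * 20
print(find_mismatches(segments, openness))  # frames flagged as suspicious
```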

Learning how to decipher deepfakes from real content is already challenging, and as GenAI tools advance, telling these apart is expected to become more difficult. Even trained professionals must rely on forensic tools and metadata analysis to accurately detect manipulation, especially as deepfakes become increasingly refined and sophisticated.

Some tips from the FBI for spotting AI-generated media include the following: 

  1. Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements. 
  2. Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical. 
  3. Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
  4. Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone. If someone you know (or an associate of someone you know) requests that you send money or cryptocurrency, independently confirm contact information prior to taking action. Also, critically evaluate the context and plausibility of the request. 

How Strong Identity Controls Can Secure Against AI Deepfakes

Accurate identity verification is a good way to block possible deepfake attacks. Know Your Customer (KYC) and identity verification processes often incorporate deepfake detection technologies, making them a good source of added security. 

Some businesses in sensitive industries are required to verify the age or identity of online customers before granting them access or rendering services, and these highly regulated industries may use enhanced KYC processes to also prevent onboarding of synthetic identities and impersonators.

Even if AI is able to copy a person’s likeness or voice, those traits shouldn’t be the only way for a user to authenticate themselves. Authentication factors fall into three categories: 

  1. Something you are 
  2. Something you know 
  3. Something you have 

Biometric details such as fingerprints or iris scans authenticate based on “something you are”; passwords, usernames, and PINs are examples of “something you know”; and physical hardware tokens such as YubiKeys fall under “something you have”.
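
As a simple illustration of layering these factors so that a cloned face or voice alone is never enough, the sketch below combines a knowledge factor (a password checked against a stored PBKDF2 hash) with a possession factor (a TOTP code verified with the pyotp library). The stored hash, salt, and shared secret are placeholders, and a real deployment would typically delegate this to an identity provider.

```python
# Sketch of combining two authentication factors so that a cloned voice or
# face alone is never enough: a knowledge factor (password) plus a possession
# factor (a TOTP code from an authenticator app). Placeholder values only.
import hashlib
import hmac
import pyotp

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # Derive a hash from the supplied password and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_totp(code: str, shared_secret: str) -> bool:
    # Check the one-time code generated by the user's authenticator app.
    return pyotp.TOTP(shared_secret).verify(code)

def authenticate(password, code, salt, stored_hash, shared_secret) -> bool:
    # Both factors must pass; a deepfaked voice or face defeats neither.
    return verify_password(password, salt, stored_hash) and verify_totp(code, shared_secret)
```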

Another important component of comprehensive deepfake attack prevention is Security Awareness Training (SAT). Legacy SAT tools that don’t offer up-to-date training content and that punish users for failure won’t be as effective at teaching end users to spot these attacks. Outdated methods can lead to user fatigue, resentment, and a lack of genuine learning. 

These traditional platforms run the risk of failing to adapt to the evolving nature of cyber threats, especially those involving social engineering or AI-generated attacks like deepfakes and advanced phishing. Being left behind on these matters results in glaring vulnerabilities within your team’s collective cyber resilience. 

More modern, behavior-based SAT platforms with positive reinforcement are better positioned to educate users due to their continuous, adaptive education model. Typically, these tools will deliver short, interactive modules tailored to user behavior and risk level, ensuring that users are not spending their valuable time slogging through unnecessary materials while reinforcing good security habits through positive feedback rather than penalties. 

By rewarding secure behavior and contextualizing lessons within real-world scenarios, they increase engagement and retention. This shift not only improves individual awareness but also strengthens the overall human layer of an organization’s cybersecurity defense.

How Deepfake Detection Works

Since most humans cannot reliably and consistently identify deepfakes, some have decided that the best course of action is to fight AI with AI. AI-powered deepfake detection tools work by looking for certain indicators within images and then assigning a score for how likely the content is to be AI-generated. 
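
The sketch below shows the general shape of that approach: an image is preprocessed and passed through a binary classifier whose output is converted into a “likely synthetic” score. The model file referenced is a placeholder rather than any vendor’s actual detector, and commercial tools typically combine many such signals.

```python
# Illustrative shape of an AI-based detector: run an image through a binary
# classifier and report a "likely synthetic" score. The model file referenced
# here is a placeholder; real products combine many signals, not one network.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def deepfake_score(image_path: str, model: torch.nn.Module) -> float:
    """Return a 0-1 score where higher means more likely AI-generated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)                 # assumes a single-logit output head
    return torch.sigmoid(logit).item()

# model = torch.load("detector.pt")          # hypothetical trained detector
# print(deepfake_score("suspect_frame.jpg", model))
```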

Liveness checking is a technique to determine if a user is a real live human, a recording, or a deepfake. It is designed to confirm that a biometric input like a face or voice is being captured from a real person in real-time rather than a spoofed, synthetic, or replayed source such as a photo, video, or deepfake. Deepfakes often struggle to mimic these real-time, spontaneous biometric cues, which is what makes this method so effective.

Biometric liveness checks can counteract both deepfakes and spoofing attempts on biometric sensors (for example, a silicone mold of a fingerprint instead of the real thing). This is most commonly implemented over video, where an automated system checks for subtle signs such as breathing or eye movements. Users may also be asked to perform a task on camera, such as moving their head in a specific way or saying a random phrase.
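
One widely used liveness signal is blinking. The illustrative sketch below computes the eye aspect ratio (EAR), which drops sharply when the eye closes, and counts blinks across frames. It assumes the six eye landmark points per frame are already supplied by a facial-landmark detector, and the thresholds shown are illustrative rather than tuned values.

```python
# Blink-based liveness sketch: the eye aspect ratio (EAR) falls sharply when
# the eye closes, so a session with no blinks over a long window is a red flag.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points ordered around the eye contour."""
    vertical = math.dist(eye[1], eye[5]) + math.dist(eye[2], eye[4])
    horizontal = math.dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    # Count runs of consecutive "closed" frames as individual blinks.
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks
```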

Liveness checking is commonly used in fintech, healthcare, and government services to defend against impersonation fraud, and is increasingly critical as deepfake technology becomes more accessible.

Best Deepfake Detection Vendors

Some emerging vendors and tools, such as Attestiv and Reality Defender, have hit the market specifically to tackle the deepfake detection problem.

Other existing vendors have started to branch out into this area:

  • Intel has released FakeCatcher for real-time deepfake detection 
  • Hyperverge, a KYC platform, now offers deepfake detection with their product 
  • IRONSCALES released deepfake protection this year as part of their anti-phishing platform 
  • OpenAI has released their own deepfake detection tool. It can classify images with 98.8% accuracy, but it is limited to only picking up if an image was generated by OpenAI’s DALL-E 3 engine. 
  • Hook Security has started to offer deepfake awareness training on their SAT platform 

Thoughts And Opportunities – Where Do We Go From Here? 

Today, there is an arms race between GenAI models training to create more believable deepfakes and AI detection tools working to classify these accurately. Advances in GenAI, especially in models that handle video, audio, and image synthesis, have dramatically increased the realism of synthetic media, which has made it harder for both humans and traditional detection tools to distinguish fakes from genuine content.

Combating deepfakes in the long term will require strong collaboration between tech companies, research institutions, and governments. Without coordinated effort, responses tend to be fragmented or reactive. A united front allows for shared standards, real-time threat intelligence, and faster development of detection and prevention strategies.

The United States recently signed the TAKE IT DOWN Act into law with bipartisan support. This legislation “prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated, and requires certain online platforms to promptly remove such depictions upon receiving notice of their existence.” 

In 2024, the FTC ran a competition called the FTC Voice Cloning Challenge. This initiative was created for researchers to combat the problem of AI voice cloning in fraud cases, and the winners were awarded $35k in prize money. 

The Coalition for Content Provenance and Authenticity (C2PA) has developed a technical specification that aims to fight deepfakes and disinformation by tracing the origins of content online. The project is backed by several major tech companies.

Some companies behind AI technologies are looking for ways to watermark images, audio, and video created by their tools. Google’s SynthID embeds a watermark in content generated by Google AI, and the company recently launched SynthID Detector, a tool that can identify that watermark.

The wild-west nature of AI-generated content and the lack of regulation around it pose some critical issues, but this may not be the case forever. If these types of solutions become more widespread in the future, they could help give people the tools to fight back against fakes. If regulators and technology leaders choose to ignore these issues as the technology continues to improve, then we may hit a point of no return where AI becomes truly undetectable. 


Looking for a solution to help mitigate the risks associated with synthetic media manipulation? Read our article on the Top 8 Deepfake Detection Solutions to find one suited to your needs. 


Written By
Mirren McDade, Senior Journalist & Content Writer

Mirren McDade is a senior writer and journalist at Expert Insights, spending each day researching, writing, editing and publishing content, covering a variety of topics and solutions, and interviewing industry experts. She is an experienced copywriter with a background in a range of industries, including cloud business technologies, cloud security, information security and cyber security, and has conducted interviews with several industry experts. Mirren holds a First Class Honors degree in English from Edinburgh Napier University.

Technical Review
Laura Iannini, Cybersecurity Analyst

Laura Iannini is a Cybersecurity Analyst at Expert Insights. With deep cybersecurity knowledge and strong research skills, she leads Expert Insights’ product testing team, conducting thorough tests of product features and in-depth industry analysis to ensure that Expert Insights’ product reviews are definitive and insightful. Laura also carries out wider analysis of vendor landscapes and industry trends to inform Expert Insights’ enterprise cybersecurity buyers’ guides, covering topics such as security awareness training, cloud backup and recovery, email security, and network monitoring. Prior to working at Expert Insights, Laura worked as a Senior Information Security Engineer at Constant Edge, where she tested cybersecurity solutions, carried out product demos, and provided high-quality ongoing technical support. She holds a Bachelor’s degree in Cybersecurity from the University of West Florida.