Deepfake detection solutions are part of a growing field of technologies dedicated to identifying and preventing the spread of manipulated digital content. These solutions are designed to detect modifications and alterations in videos, images, and audio clips, which are usually generated using artificial intelligence. To achieve this, deepfake detection solutions typically combine deep learning algorithms; image, video, and audio analysis tools; forensic analysis; and blockchain technology or digital watermarking, all of which help them identify inconsistencies that are undetectable to the human eye.
While there are some positive applications of synthetic content, such as in the entertainment industry, deepfakes can also pose significant security risks. They can be used by threat actors to manipulate public opinion (for example, during elections or wartime), spread misinformation, and impersonate individuals in order to convince someone to do something they shouldn’t, such as wire money to a fraudulent account.
By utilizing deepfake detection software, organizations can guard themselves and their users against the potential threat posed by deepfakes. Deepfake detection tools help maintain the truth and authenticity of information, preserve the integrity of individuals and brands, and prevent users from falling victim to sophisticated deepfake phishing attacks.
It’s important to note that deepfake detection is an emerging category of cybersecurity, with many products in beta or early stages of development. Because of this, the accuracy of these products is somewhat difficult to measure. That being said, in this article, we’ll explore the top deepfake detection solutions currently on the market. We’ll highlight the key use cases and features of each solution, including AI-powered multimedia analysis and suspicion alert mechanisms.
In 2018, Zemana’s subsidiary Deepware started researching deepfake detection and generation. Today, Deepware provides a deepfake detection platform called Deepware Scanner, which aims to prevent the spread of synthetic media and disinformation. To achieve this, the platform detects AI-generated manipulations of human faces within videos uploaded to platforms such as YouTube, Facebook, and Twitter.
Deepware Scanner is available as a web platform, but can also be used via a RESTful API or deployed on-premises via an SDK. The tool is easy to use: the user simply submits the URL of the video they want to scan through Deepware Scanner’s web interface, where it is analyzed by the Deepware AI model. For the API and SDK versions, the user then sends the report ID from their scan to the platform’s reporting endpoint to obtain the results. Note: Deepware Scanner can currently analyze videos of up to 10 minutes in length.
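To make the two-step API flow concrete, here is a minimal sketch of a client that submits a video URL and then fetches the report. The host name, endpoint paths, header names, and response fields below are illustrative assumptions, not Deepware’s documented API; consult the vendor’s API reference for the real contract.

```python
# Hypothetical sketch of a two-step scan-then-report API flow.
# All endpoint and header names are illustrative placeholders.
import json
import urllib.request

BASE_URL = "https://api.example-deepware-host.com"  # placeholder host

def build_scan_request(video_url: str, api_key: str) -> urllib.request.Request:
    """Build the POST request that submits a video URL for scanning."""
    payload = json.dumps({"video_url": video_url}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/video-scan",
        data=payload,
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def build_report_request(report_id: str, api_key: str) -> urllib.request.Request:
    """Build the GET request that retrieves the scan results by report ID."""
    return urllib.request.Request(
        f"{BASE_URL}/video-report/{report_id}",
        headers={"X-Api-Key": api_key},
        method="GET",
    )
```

In practice the submit call would return a report ID, which the client then passes to the reporting endpoint, mirroring the workflow described above.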
DuckDuckGoose is a provider of deepfake detection solutions designed to protect organizations against potential threats involving manipulated media, including images, videos, audio, and text. Their comprehensive suite of verification tools not only confirms whether content is a deepfake, but also provides clear explanations of how the manipulations were identified, giving users high confidence in the accuracy of the detection.
DuckDuckGoose offers six deepfake detection tools. AI Voice Detector determines the authenticity of audio content, using AI to distinguish between genuine human voices and AI-generated voices in any language within 5 seconds. The tool encrypts all voice data during analysis. AI Text Detection analyzes text for content written by AI text generators, such as ChatGPT and Google Bard. DeepDetector identifies deepfake images and videos in real-time, using AI algorithms to detect deepfake techniques such as face swapping, lip-syncing, and audio manipulation. This tool provides detailed explanations of its analysis results, enabling users to take action against deepfake content with confidence. It also offers an API that enables users to integrate it with their existing identity verification tools.
DuckDuckGoose also offers a demo version of DeepDetector called Phocus. DeepfakeProof is a free browser extension that enables users to detect deepfake images online. It scans webpages in real-time and provides alerts when it detects AI-generated or deepfake images, helping to prevent the spread of misinformation. Finally, Deepfake Maker enables users to generate high-quality deepfake images and videos that they can use to test their deepfake detection systems.
DeepMind, Google’s research lab that focuses on AI and machine learning, has beta-launched SynthID, a watermarking tool designed for AI-generated content. SynthID embeds digital watermarks into AI-created images, video, and audio, enabling easy identification. This watermark, although undetectable to human senses, helps AI content creators assert content provenance and promote trust in AI-generated material.
SynthID utilizes two deep learning models: one is responsible for the insertion of the watermark; the other for identifying it. When applied to images, the watermark is directly embedded into the pixels, remaining hidden to the naked eye, but readily recognizable by SynthID. Audio material uses a similar method, where the watermark is incorporated into the audio waveform. This invisible watermark is robust enough to withstand modifications without compromising the quality of the image or sound. A limited number of Vertex AI customers using the Imagen text-to-image models also have access to SynthID for secure identification of AI-generated visuals.
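Google has not published SynthID’s embedding algorithm, which relies on the paired deep learning models described above. To illustrate only the general principle of an imperceptible, machine-readable watermark, here is a deliberately naive least-significant-bit sketch; it bears no relation to SynthID’s actual method and, unlike SynthID, would not survive edits such as compression or resizing.

```python
# Toy illustration of imperceptible watermarking: hide bits in the least
# significant bit of pixel values, then read them back. NOT SynthID's method.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write watermark bits into the LSB of the first len(bits) pixel values.
    Changing only the lowest bit shifts each pixel by at most 1, which is
    invisible to the eye."""
    marked = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Recover the first n_bits watermark bits from the pixel LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]
```

A real system like SynthID instead trains the embedder and detector jointly so the mark is spread robustly across the whole image rather than sitting in fragile low-order bits.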
The tool’s application reaches AI-generated music as well, with the watermark added to content from Lyria, DeepMind’s AI music generation model. SynthID keeps the watermark inaudible to the human ear while preserving its detectability by the model. Currently, SynthID is integrated with Veo and available to Vertex AI customers.
Intel has leveraged its Responsible AI initiative to launch FakeCatcher, a product designed to identify and flag fraudulent videos. Working through a web-based platform, FakeCatcher operates on a server using both Intel software and hardware.
The software component of FakeCatcher combines specialized tools, including AI-focused face and landmark detection algorithms and a toolkit for real-time image and video analysis. The system runs on a 3rd Gen Intel Xeon Scalable processor, which can manage up to 72 separate detection streams concurrently.
Unlike traditional deep learning-based detectors, which seek signs of inauthenticity by scrutinizing raw data, FakeCatcher employs a unique method that focuses on identifying authentic, human markers within real videos. For example, it detects changes in a video’s pixels that signify blood flowing through the veins of the face, and its algorithm converts those changes into spatiotemporal maps. Then, with the application of deep learning, FakeCatcher can discern if a video is real or doctored.
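FakeCatcher’s PPG-based analysis is proprietary, but the underlying signal is easy to demonstrate: skin pixels brighten and dim very slightly with each heartbeat. The toy sketch below (my simplification, not Intel’s implementation) recovers a pulse-like frequency from the green channel of a frame sequence; a real system builds far richer spatiotemporal maps from many facial regions before handing them to a deep learning classifier.

```python
# Toy photoplethysmography (PPG) sketch: recover a heartbeat-like frequency
# from subtle brightness changes in video frames of skin.
import numpy as np

def pulse_signal(frames: np.ndarray) -> np.ndarray:
    """Average the green channel over each frame (assumed to be a face crop),
    yielding a 1-D temporal signal that rises and falls with blood volume.
    frames: array of shape (n_frames, height, width, 3)."""
    return frames[:, :, :, 1].mean(axis=(1, 2))

def dominant_frequency_hz(signal: np.ndarray, fps: float) -> float:
    """Return the strongest frequency component of the detrended signal."""
    centered = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    return float(freqs[int(np.argmax(spectrum))])
```

For a real face, the dominant frequency lands in the plausible heart-rate band (roughly 0.7 to 3 Hz); deepfake synthesis tends to destroy this physiological consistency, which is the cue FakeCatcher exploits.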
Reality Defender is an enterprise-level solution for detecting and neutralizing deepfakes in video, audio, and image content. The product leverages advanced deepfake and generative content fingerprinting technology for comprehensive and proactive scanning, providing organizations with actionable insights and detailed reporting to safeguard against deceptive digital content.
A notable feature of Reality Defender is its Generative Text Detection, which can identify AI-generated text. Capable of scanning content from numerous language models such as ChatGPT, Bard, and Bing AI, the tool offers quick, actionable results, displaying the likelihood of text manipulation. Users have the option of detecting text via the web application, or by using the Reality Defender API to upload multiple files simultaneously.
Expanding its utility beyond text-oriented deception, Reality Defender also offers anti-fraud tools for voice scanning and document detection. These features enhance an organization’s fraud prevention arsenal, enabling real-time call scanning, media authenticity verification, and user verification.
Reality Defender’s detection technology employs state-of-the-art models along with prospective technologies, balancing present security needs with forward-thinking strategies. The technology is continuously informed by the work of premier machine learning and computer vision research teams, ensuring up-to-date protection.
Sensity is an AI-driven solution designed for the efficient detection of deepfake content such as face swaps, manipulated audio, and AI-generated images. The technology behind Sensity prioritizes swift, accurate identification, enhancing security and reducing the workload for analysts.
Sensity helps safeguard the integrity of users’ identities in digital interactions by effectively identifying and flagging impersonation attempts, providing advanced protection against deepfakes and serving as a robust line of defense. Sensity can also bolster security in Know Your Customer (KYC) processes through the use of its Software Development Kit (SDK) integrated with the Face Manipulation Detection API. This offers invaluable defense against identity theft attempts that use advanced face swap techniques.
Sensity’s algorithms cater to a broad range of forensic checks, applicable to all audiovisual content. This versatile platform provides services like accurate face matching, even in non-optimal conditions, liveness checks to prevent AI-powered spoofing attempts, fraudulent document detection, and ID document verification. It also offers integrations with other third-party platforms via API.
Sentinel is a technology company that specializes in developing artificial intelligence-powered detection platforms. These platforms are utilized by government entities, defense agencies, and media organizations for identifying and combating disinformation campaigns, synthetic media, and information operations.
The key technology of Sentinel is based on an advanced AI detection system, specifically designed to recognize deepfake videos and audio with a high degree of accuracy. This detection process is driven by advanced neural networks that analyze factors such as facial expressions, blinking patterns, and audio manipulations. Deep learning models are utilized to further unmask deepfakes by analyzing a range of biological signals, facial recognition data, and audio data. Convolutional neural networks also contribute to the precise detection potential of the system, enabling effective recognition of manipulated images and videos.
Sentinel’s deepfake detection process is quick and easy to use, following a simple workflow: the user uploads media to the platform via an API or the company’s website; the system automatically analyzes the input for AI forgery and determines whether the content is a deepfake; it then provides a visualization of any discovered manipulation. To maximize detection accuracy, the system employs a multi-layer defense approach that incorporates real vs manipulated face classification, a vast database of verified deepfakes, AI-generated audio classification, and an ensemble of neural network classifiers.
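The layered idea can be sketched simply: each defense layer emits a deepfake probability, and the layers’ outputs are fused into one verdict. Sentinel does not document its fusion rule, so the averaging below, and the layer names, are illustrative assumptions only.

```python
# Hypothetical sketch of fusing multi-layer detector outputs into one verdict.
def ensemble_verdict(detector_scores: dict[str, float], threshold: float = 0.5):
    """Fuse per-layer deepfake probabilities (e.g. a face classifier, an audio
    classifier, a known-fakes database match) into a single score and verdict.
    Simple averaging is an assumed rule, not Sentinel's documented method."""
    score = sum(detector_scores.values()) / len(detector_scores)
    return score, score >= threshold
```

Real ensembles usually weight layers by their measured reliability, but even plain averaging shows why multiple independent detectors are harder to fool than any single one.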
WeVerify is a company focused on tackling challenges related to advanced content verification. Its Deepfake Detector solution scrutinizes social media and web content to detect disinformation and deceptive or fabricated content. This can then be exposed through a blockchain-based public database of known fakes. It does this via a participatory verification strategy, open-source algorithms, human-in-the-loop machine learning, and easy-to-understand visualizations.
Deepfake Detector analyzes images or videos and gives a probability score indicating whether the media contains faces manipulated through deepfake techniques. When working with videos, each shot is segmented, and probabilities are assessed for each frame. The final deepfake probability score at the video level is determined by the shot with the highest deepfake probability.
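The video-level scoring rule described above is easy to express in code. The sketch below follows the stated rule (video score = highest shot score); how frame probabilities combine into a shot score is not specified, so the averaging step is an assumption.

```python
# Sketch of frame -> shot -> video deepfake score aggregation.
def video_deepfake_score(shots: list[list[float]]) -> float:
    """shots: one list of per-frame deepfake probabilities per shot.
    Each shot is scored by averaging its frame probabilities (averaging is
    an assumed aggregation), and the video-level score is the highest shot
    score, per the rule described above."""
    shot_scores = [sum(frames) / len(frames) for frames in shots]
    return max(shot_scores)
```

Taking the maximum over shots reflects a conservative stance: a single heavily manipulated shot is enough to flag the whole video.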
Deepfake Detector is a decentralized platform for collaborative content verification, tracking, and debunking. It caters to communities, citizen journalists, newsrooms, and freelance journalists, and can be seamlessly integrated with in-house content management systems via WeVerify’s API. It can also be used as a plugin.
Deepfakes are a type of synthetic media created using computer processing and machine learning techniques (specifically “deep” learning). These techniques are used to either create entirely new content or to alter existing content, to make it appear as though someone did or said something that they actually did not.
Deepfakes commonly involve the substitution of one person’s face onto another person’s body in a video or image. This technique is called “face swapping” and is achieved using a deep neural network called a generative adversarial network, or “GAN”. GANs use two machine learning algorithms to create deepfakes: one creates the image, and the other tries to detect it. When the detection algorithm identifies the deepfake, the first one improves it to try to get past the detection—this goes on until the creation algorithm defeats the detection algorithm by creating an image that’s virtually impossible to identify as being fake. Using a GAN, individuals can create images with realistic facial expressions, lip movements, and other non-verbal cues that make it very difficult to distinguish the manipulated content from an original.
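The adversarial loop described above can be shown in miniature. The toy example below trains a one-dimensional GAN in plain NumPy: a linear generator learns to imitate samples from a target distribution while a logistic-regression "detector" tries to tell real from fake. The architecture and hyperparameters are illustrative choices for demonstration, nothing like the deep networks used for face swapping.

```python
# Minimal 1-D GAN: a linear generator vs a logistic-regression discriminator.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator must learn to imitate: samples from N(4, 1).
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: x_fake = a*z + b (starts out producing N(0, 1))
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c), the "detector"

lr, steps, batch = 0.05, 2000, 64
for _ in range(steps):
    z = rng.normal(0.0, 1.0, batch)
    x_real, x_fake = real_batch(batch), a * z + b

    # Discriminator step: push d toward 1 on real samples and 0 on fakes.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = (-(1.0 - d_real) * x_real + d_fake * x_fake).mean()
    grad_c = (-(1.0 - d_real) + d_fake).mean()
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: adjust a, b so the updated detector scores fakes as real.
    d_fake = sigmoid(w * (a * z + b) + c)
    gsig = -(1.0 - d_fake) * w   # gradient of -log d(x_fake) w.r.t. x_fake
    a, b = a - lr * (gsig * z).mean(), b - lr * gsig.mean()

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float((a * rng.normal(0.0, 1.0, 1000) + b).mean())
```

Each round, the detector gets better at spotting fakes and the generator shifts its output toward the real distribution, exactly the arms race described above, until fakes and real samples are statistically hard to tell apart.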
But deepfakes don’t always involve the creation of a fake image, they can also involve voice synthesis. In this type of deepfake, voice cloning technology is used to create audio content that realistically mimics the speech patterns and intonations of a specific individual.
While deepfake technology does have positive applications, such as being used to create special effects in the entertainment industry, it can also be used by threat actors to create deceptive content that manipulates public opinion, spreads misinformation, and impersonates individuals (e.g., to convince company employees to transfer money to fraudulent accounts).
And unfortunately, the process of creating deepfakes is becoming increasingly accessible, with numerous out-of-the-box image synthesizing tools readily available on GitHub.
Deepfake detection solutions are tools designed to identify deepfake content. Typically, they use deep learning algorithms, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), to analyze patterns and anomalies in media content and identify signs of manipulation. They may also use: image and video analysis tools that look for inconsistencies in facial expressions, lighting, and lip sync; audio analysis tools that identify unnatural speech patterns, anomalies in voice characteristics, or inconsistencies in the audio track; and behavioral analysis tools that focus on anomalous behavioral cues, such as eye movement and facial expressions.
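As a concrete example of the behavioral angle, researchers have flagged early deepfakes by their unnaturally low blink rates, since training data rarely contained closed-eye frames. The sketch below counts blinks from a per-frame eye-aspect-ratio (EAR) sequence and flags videos whose subjects blink too rarely; the thresholds are illustrative assumptions, not any vendor’s tuned values.

```python
# Toy behavioral check: flag videos whose subjects blink abnormally rarely.
def count_blinks(eye_aspect_ratios, closed_threshold=0.2):
    """Count blinks in a per-frame eye-aspect-ratio sequence: a blink is a
    transition from open (EAR at or above threshold) to closed (below it)."""
    blinks, was_open = 0, True
    for ear in eye_aspect_ratios:
        is_open = ear >= closed_threshold
        if was_open and not is_open:
            blinks += 1
        was_open = is_open
    return blinks

def blink_rate_suspicious(eye_aspect_ratios, fps, min_blinks_per_minute=4.0):
    """True if the subject blinks less often than a plausible human minimum
    (healthy adults typically blink well over 4 times per minute)."""
    minutes = len(eye_aspect_ratios) / fps / 60.0
    return count_blinks(eye_aspect_ratios) / minutes < min_blinks_per_minute
```

Modern generators have largely fixed the blinking tell, which is why production detectors combine many such cues rather than relying on any single one.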
In addition to analyzing the content itself, some deepfake detection solutions use forensic analysis tools and blockchain or watermarking to identify signs within its metadata that the content may be fake. Forensic analysis tools examine the digital fingerprints and artifacts left behind during the creation of deepfake content, while blockchain technology and digital watermarking can authenticate and verify the origin of media content. This helps ensure the integrity of the content and prevent unauthorized modifications.
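Watermarks live inside the media itself, while provenance schemes work at the level of the file: a cryptographic digest recorded at creation time (for example, in a blockchain entry or a content-credentials manifest, both assumed infrastructure here) lets anyone later verify that the content is unmodified. A minimal sketch of that verification idea:

```python
# Sketch of hash-based content provenance: record a digest at creation time,
# then verify the media against it later. The ledger storing the digest is
# assumed infrastructure and not shown.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of the media file, recorded when the content is made."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_unmodified(media_bytes: bytes, recorded_digest: str) -> bool:
    """True only if the media is byte-for-byte identical to the original."""
    return fingerprint(media_bytes) == recorded_digest
```

Flipping even one byte of the file changes the digest completely, which is what makes unauthorized modifications detectable; the trade-off is that legitimate re-encoding also breaks the match, which is why hashing is paired with watermarking rather than replacing it.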
By using techniques like these to study the patterns used in the generation of deepfakes, deepfake detection solutions can more accurately identify anomalies that distinguish manipulated content.
Deepfake detection is an emerging category of cybersecurity technology. Many of these tools are still in beta or early stages of development, which means that their feature sets are still evolving and may vary between different solutions. However, there are some features that are common to most deepfake detection solutions, and which you should consider when comparing tools. These include:
It’s also important to remember that the best protection against deepfake threats combines detection technologies such as those featured in this list, with user education. If this is something you’re not yet investing in, check out our guide to the best security awareness training platforms for business.
Caitlin Harris is Deputy Head of Content at Expert Insights. Caitlin is an experienced writer and journalist, with years of experience producing award-winning technical training materials and journalistic content. Caitlin holds a First Class BA in English Literature and German, and provides our content team with strategic editorial guidance as well as carrying out detailed research to create articles that are accurate, engaging and relevant. Caitlin co-hosts the Expert Insights Podcast, where she interviews world-leading B2B tech experts.
Laura Iannini is an Information Security Engineer. She holds a Bachelor’s degree in Cybersecurity from the University of West Florida. Laura has experience with a variety of cybersecurity platforms and leads technical reviews of leading solutions. She conducts thorough product tests to ensure that Expert Insights’ reviews are definitive and insightful.