RSAC 2024

The Danger Of AI Deepfakes With Reality Defender CEO Ben Colman 

Expert Insights interviews Ben Colman, CEO and Co-Founder at Reality Defender.

Interview: Ben Colman

Fake news is a problem we’ve all become familiar with over the last decade. But the rise of generative AI is turning manipulated media into a billion-dollar industry.

Using commercial AI tools available today, you can take a few video and voice samples of an individual to generate a ‘deepfake’ image or video of them saying whatever you like.

These deepfakes are so realistic that even the developers of the software say they are unable to tell what is real and what is not. But there can be no doubt about the severity of the dangers posed by AI deepfakes.

Earlier this year, a finance worker in Hong Kong transferred $25 million USD to fraudsters who used deepfake technology to impersonate his CFO, CEO, and others on a Zoom call. LexisNexis estimates that AI-assisted fraud schemes could cost US taxpayers over $1 trillion USD in the next year.

Ben Colman is the CEO & Co-Founder of Reality Defender, a US-based company that provides AI deepfake detection. Reality Defender was recently named winner of the RSAC Innovation Sandbox for its solution that detects disinformation, fraud, and AI-generated content in real time.

Listen – Ben Colman, CEO & Co-Founder Of Reality Defender On the Expert Insights Podcast

Reality Defender helps organizations detect deepfakes and manipulated images, video, audio, and text. The solution uses multiple interconnected inference models, similar to an antivirus tool, to analyze several indicators of the techniques used to create deepfaked media.

The platform uses a probabilistic approach to create a confidence score for each piece of media, looking at factors like background noise, compression, and pixelation. It assumes it does not have the ground truth, and that the files it analyzes may have been edited and manipulated many times.
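To illustrate the general idea of combining several independent detection signals into one probabilistic confidence score, here is a minimal sketch. The function name, signal values, and weights are hypothetical illustrations of this ensemble pattern, not Reality Defender's actual models or API.

```python
# Hypothetical sketch of a probabilistic ensemble score.
# Signal names and weights are illustrative only.

def aggregate_confidence(scores, weights=None):
    """Combine per-model probabilities that media is manipulated
    (each in 0..1) into a single weighted-average confidence score."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Example: three independent signals (e.g. background noise analysis,
# compression artifacts, pixelation), each scored by a separate model.
signals = [0.92, 0.78, 0.85]
confidence = aggregate_confidence(signals)
print(f"Manipulation confidence: {confidence:.2f}")
```

A weighted average is the simplest way to express "no single model has ground truth"; a production system would more likely calibrate and fuse model outputs with a learned meta-model rather than fixed weights.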

One of the core markets for Reality Defender is the financial services industry, where voice fraud scams are becoming more common. "A few years ago, you were told your voice is your password," Colman says. "Now, with off-the-shelf tools, you could make a deepfake of me with less than ten seconds of audio."

Reality Defender addresses this by helping call centers and anti-fraud teams identify simulated and synthesized callers in real time. But it's not just financial services. "Every single industry vertical needs a platform like Reality Defender," Colman says.

Regulating AI Deepfakes

Colman recently testified to Congress about the need for strong regulation to counteract manipulated media and deepfakes. He told the Senate Subcommittee on Privacy, Technology, and the Law: “We must mutually agree… that the deepfake threat is a threat to everyone… and we must act quickly and at the same speed that AI progresses, lest we be taken by surprise by new attacks… on truth.”

Watch – Reality Defender – RSA Conference 2024 Innovation Sandbox 

Reality Defender fully supports AI innovation, Colman says, but there is a need for regulation that “allows for innovation, while at the same time limiting this very slim minority of use cases that have disproportionate risks for the world.” 

Deepfakes are expected to play a role in upcoming elections this year. In the UK, a viral deepfake audio clip of Leader of the Opposition Keir Starmer has been described as UK politics' first deepfake moment. Academics have already called for people to be taught how to spot AI deepfakes ahead of the general election, called last week.

Deepfakes are already playing a role in US elections. In 2020, a manipulated video purported to show Joe Biden falling asleep during a live interview. During this year's primary campaign, a deepfake robocall of President Biden instructed thousands of New Hampshire Democrats not to vote in the primary election.

The Reality Defender team is currently working with a number of public, private, and research organizations to investigate deepfake incidents across geopolitics, Colman says. Reality Defender is "proactively engaged ahead of time. A lot of our models are forward thinking, in the sense that they're judging things we haven't seen yet, in terms of deepfakes. Our goal is to identify them, before they go viral."

Where Next For Deepfake Detection?

Colman sees deepfake detection following a similar trajectory to antivirus technology: moving from scanning specific files on demand to providing continuous, always-on protection in the background.

“Our goal is to follow a lot of the same growth story as antivirus solutions. These platforms started in the 80s, early 90s, and allowed individuals to pick which files they wanted to scan to see if there’s a virus. If you remember, companies or schools told us to log out at six o’clock on a Friday, because it’s going to take over our computer to do updates and check for viruses.”

“But now [malware scanning] is everywhere. You only realize that your computer’s scanning for a virus, because your Gmail or Outlook tells you that it has blocked an email.”

“We’re very much in the first chapter there… but within the next two years, if not sooner, we will run a lot of our computation on devices themselves, which will allow our clients (hopefully) to scan everything, all the time. Across all media, all communications, all internet browsing. But we’re still a few months away from doing that.”

And will deepfakes still be a fundamental security issue we talk about decades on, in the same way viruses are?

“100%. I think we’ll look back and say, ‘Wow. I can’t believe we had this issue. Now we have regulations here. But also, if you think about it, [we’ll be talking about] a lot of the good uses of generative AI.”

Learn more about Reality Defender.




About Expert Insights

Expert Insights is a B2B research and review platform for IT solutions and services. We help over one million IT managers, CISOs, small business owners, and other professionals discover the best IT and cybersecurity solutions.