How Organizations Can Embrace GenAI Without Compromising Security

Expert Insights interviews Alastair Paterson, CEO and Co-Founder at Harmonic Security.

The data privacy concerns surrounding GenAI tools are part of a wider data privacy issue, Alastair Paterson, CEO and Co-Founder at Harmonic Security, tells Expert Insights. But there is a way for organizations to embrace new technologies without sensitive data leaving the business—whether that’s via ChatGPT or any of the other thousands of GenAI tools in use.

We caught up with Paterson on this month’s episode of the Expert Insights Podcast. Read on for the highlights, and you can listen to the full episode here.

The Big Picture: The use of GenAI has exploded in recent years, with many users embracing these technologies to help them work more efficiently, reduce operational costs, and free up their time for more creative work. However, not all organizations are ready to embrace GenAI; in fact, 58% of CISOs believe that the risks of AI outweigh its potential benefits.

Driving The Issue: 77% of organizations say data privacy is one of the biggest barriers to their adoption of GenAI. There are two reasons for this: the first is that GenAI models may store prompt or input data in an insecure or non-compliant way. This is a challenge that most organizations using third-party apps will be familiar with.

The second, more complex, reason is that many free GenAI tools utilize users’ data to train and improve the service—which can cause both security and compliance issues if users input sensitive, personal, or corporate data.

  • “Across about 8,000 applications that we looked at, about 30% of them explicitly train on the data that’s put in. So, if you’ve got critical IP about your new product releases, figures that aren’t released to the market yet, customer data, or anything else that’s critical to the business and it goes into someone else’s model, the risk is that it then becomes accessible to others using that same model over time.”
  • “Even if you’re paying for a license, your employees might be logged in with their own personal accounts and still putting your corporate data into a personal edition of those types of AI applications.”

What’s The Solution? Putting a blanket ban on GenAI tools simply doesn’t work, says Paterson.

  • “A pure blocking approach is probably not going to catch everything and may end up frustrating employees, leading them to use their own devices or forward things to themselves.”

Instead, the answer to safely allowing the use of GenAI in the workplace lies in a different kind of AI—one that can identify sensitive data and stop it from leaving the organization.

  • “[At Harmonic] we’ve been building our own language models specifically focused on data protection […] We sit in line and look at sensitive data leaving the enterprise […] and rather than fire a load of alerts at some poor analyst […], we’re accurate enough that we can sit in line with the employees and actually coach them and nudge them towards a safe way of getting their job done instead of blocking them.”
  • “It’s great from an employee’s perspective because it means they’re not getting blocked all the time; they can actually get on with their job unless they’re putting the company at risk.”
  • “From a security team’s perspective, we’re calling it ‘zero touch data protection’ because it’s so lightweight on the security team.”

The Bottom Line: “Pretty much every company we speak to is trying to figure out how to tread the line between getting left behind, but also running the risk of adopting some of this technology. I think it’s a pretty unique time for security where we can actually be the heroes for once and not just seen as this cost center that people don’t want to talk about. […] We’re potentially an enabler for GenAI.”

Final Advice: Paterson’s advice for security and compliance teams looking to get more control over GenAI applications is to focus first on education and coaching, and then on controls.

  • “Yes, we need a policy, but policy alone isn’t going to solve this.”
  • “The way to approach that is to really understand the business drivers—talk to colleagues, understand the use cases that different teams have got, and then make sure that there are secure ways of meeting that with the technology we’re deploying.”
  • “[In terms of controls] the first bit is [gaining] visibility into what our employees are doing, what types of use cases they’re pushing into these tools, which tools they’re using. And then [working out] how we can get our arms around that and put sensible restrictions around it without blocking everybody.”

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions with confidence.

For more interviews with industry experts, visit our podcast page here.