AI Solutions

Jamie Moles On How Businesses Can Secure And Govern The Use Of Generative AI Tools 

Jamie Moles, Head of Technical Marketing at ExtraHop, discusses the findings of ExtraHop’s recent report, The Generative AI Tipping Point.

Since the public release of OpenAI’s ChatGPT just over one year ago—and the subsequent release of numerous other generative AI tools—many users have embraced the productivity benefits of generative AI and large language models (LLMs). We’re now seeing the popularity of these tools increasing not only in the consumer space, but also within the workforce. However, though end user adoption is high, many IT and security leaders share concerns about the use of these technologies in the workplace. 

According to recent research from ExtraHop, the top concerns include the prospect of receiving inaccurate or nonsensical responses, the exposure of customer and employee PII (and any subsequent compliance violations), the exposure of trade secrets and any associated financial loss, and concerns about biases. But despite these concerns, less than half of organizations are currently monitoring their employees’ use of AI, only 46% have acceptable use policies in place, and only 42% train their users on the safe use of these tools.

“The vast majority of organizations are noting that [Gen AI] is useful,” says Jamie Moles, Head of Technical Marketing at ExtraHop, a leading network detection and response (NDR) provider. “[Gen AI] is going to uplevel our staff to make them even more amazing than they are already, and we need to make use of this technology […] because if we don’t, our competitors will, and we will lose our competitive edge in the market.” 

“But we also need to manage the risk because it’s like Icarus. You risk flying too close to the sun. If you’re not careful, you’ll crash and burn.” 

Bringing over 35 years of experience in IT, with a particular focus on infrastructure and security technologies, Jamie is a seasoned thought leader in the cybersecurity arena. In his current role at ExtraHop, Jamie is responsible for helping ExtraHop’s customers better understand the risk and challenges they’re facing, while helping minimize the time it takes them to respond to cyberthreats. 

In an exclusive interview with Expert Insights, Jamie discusses the findings of ExtraHop’s recent report, The Generative AI Tipping Point, focusing in particular on the importance of governing the use of generative AI tools, and whether this is a challenge that businesses will have to tackle alone.

You can listen to our full conversation with Jamie on the Expert Insights Podcast.

Data Leakage Is The Top Gen AI Concern

As the popularity of generative AI tools increases, so too do the concerns that IT and security leaders have about the security of these technologies. When ChatGPT was initially released to the public, many security professionals were concerned about the potential for threat actors to use it to quickly write targeted spear-phishing emails or ransomware code—in other words, we would still be facing the same threats, but potentially more of them. However, the platform’s developers released an update that prevents ChatGPT from generating malicious content.

“A lot of that capability is now gated on the platform,” says Jamie. “It won’t allow you to create security threats anymore. But it’s not the only gen AI tool or LLM out there—there are others on the dark web that you can access, and there are other platforms like Google Bard and Microsoft Copilot, etc.”

But while OpenAI and other Gen AI providers are addressing this concern, IT and security professionals still have doubts about the security of these technologies.

“Nearly 82% of the respondents to our survey said that they were confident that their organization’s current security stack could protect them from threats from generative AI tools. But on the flip side, 74% were planning to invest in generative AI security measures,” says Jamie.

The reason for these conflicting statements, suggests Jamie, is that organizations are confident in their ability to defend against AI-generated or AI-assisted attacks that are the same as ones they’re already equipped to deal with. For example, that their email security solution can pick up on AI-generated phishing attacks, or that their EDR tool can block AI-generated malware. However, they’re less confident in their ability to protect against the risks that are introduced when their own users start using these tools. 

“The very first [risk] that is of concern is the risk of leakage of data or company-specific information. For example, there’s an AI platform called Tome, which has a really nice utility for building presentations. When it first came out, I went onto it and gave it a one sentence command: ‘Please can you generate for me a slideshow talking about the NDR market.’ It produced a really good presentation of about seven to eight slides that went into the details of NDR, ExtraHop’s competitors, the market share, the roadmap for the future of the market—and that was just with me not providing it with anything other than a question. But these tools also allow you to upload your own content, and this is where the immediate risk lies.”

Many Gen AI technologies enable users to input their own data, which the tools then use to generate carefully tailored content for the user. But once the user has added their data, they can’t get it back—it becomes part of the LLM’s database. This means that the tool can use that user’s data to create content for other users. 

“Very senior leaders in finance and technology are telling us they’re concerned around this very significant threat, from the leakage of personal data and sensitive corporate data into these models,” says Jamie. “And users won’t even know that they’re doing it.”

Businesses Need To Embrace Gen AI Use—And Audit It

In an attempt to prevent this risk from materializing, nearly one third of organizations have banned the use of generative AI tools outright. But prohibition is not an effective means of governing the use of Gen AI, says Jamie. 

“Anybody who’s been in the IT industry for any significant period of time, or has worked as an IT administrator will tell you, [banning tools] is not feasible anymore,” Jamie says. “We’ve moved on from the days of blocking access to systems on the internet because we are an internet generation—business is done on the internet nowadays. And [Gen AI] tools are based on the internet. 

“Now, you might be able to block certain services at the firewall or at the proxy, and you might be able to block ChatGPT, Google Bard, and others by blocking access to the domain, but there are so many tools out there, and so many more coming, that the maintenance effort required to keep blocking these will become unmanageable very quickly. I think we may even be at that point already.”
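To illustrate why domain-level blocking becomes hard to maintain, here is a minimal sketch of the approach, assuming a hand-kept blocklist of Gen AI domains checked against web requests. The domains, function name, and sample URLs are illustrative assumptions, not a description of any particular proxy or firewall product.

```python
# Minimal sketch (assumption): a hand-maintained blocklist of Gen AI domains,
# checked against requested URLs. Domains and URLs here are illustrative only.
from urllib.parse import urlparse

# Every new Gen AI service has to be added here by hand -- the maintenance
# effort Jamie describes grows as more tools appear.
GEN_AI_BLOCKLIST = {
    "chat.openai.com",
    "bard.google.com",
    "copilot.microsoft.com",
    "tome.app",
}

def is_blocked(url: str) -> bool:
    """Return True if the requested host matches a blocklisted Gen AI domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in GEN_AI_BLOCKLIST)

if __name__ == "__main__":
    for url in ["https://chat.openai.com/c/abc", "https://news.example.com/story"]:
        print(url, "->", "blocked" if is_blocked(url) else "allowed")
```

Every new tool that launches has to be added by hand, which is exactly the maintenance burden Jamie argues quickly becomes unmanageable.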

Rather than banning Gen AI tools, organizations should be setting out acceptable use policies, ensuring that users understand those policies and the risks of leaking sensitive data on these platforms, and auditing their use, Jamie says. By monitoring data transfers on the network to and from Gen AI sites, organizations can identify which users are uploading company data to them, then address those individuals to make sure they understand the risks involved in how they use the tool.

There are three main technologies available that enable organizations to do this, says Jamie. These are proxies and firewalls, which can monitor which sites users are connecting to on the internet so you can identify which users are visiting Gen AI tools and how often, and security appliances.

“Security appliances, like our own product Reveal(x), have the ability to not just tell you what sites users are going to, how long they use that site for, or how many times they access it, but it will also measure the bytes in and bytes out,” says Jamie.

“If you go to a website like Facebook or Sky News, the usual profile of activity is up to 100 or so bytes out from you to that website, which is just you entering a website’s URL, and then you get megabytes of data flowing back in the opposite direction, as you get the page and you get the photos and the graphics and everything. It’s a very unequal communication. 

“If you go to a large language model site—or actually any site on the internet where you’re sending data out—if you’re sending out large amounts of cut and paste data, files, graphics, or anything else like that, that balance is going to be more equal, or it might even be heavier in your favour. So, when you monitor sites like Google Bard, ChatGPT, Tome, Microsoft Copilot—any site where you have the ability to upload information to—if you’re keeping a track of the bytes in and bytes out metrics on the network, you’re in a position to say, ‘The vast majority of users are not really sending much out, so the only text that’s going out is the questions that they’re asking of the platform.’ But if someone sends files out, it’s going to be megabytes.

“So, just by being able to map bytes in and bytes out metrics to usernames and to domain names of these sites, it’s actually really easy to audit the usage of these sites and see if you’ve got a risk or not.”
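To make this bytes in/bytes out auditing concrete, below is a minimal sketch, assuming flow records that already carry a username, destination domain, and byte counts in each direction, as a proxy with user attribution or an NDR product might provide. The field names, domain list, sample data, and 1 MB threshold are assumptions for illustration rather than any specific product’s behaviour.

```python
# Minimal sketch (assumption): audit Gen AI usage from per-flow records carrying
# user, destination domain, bytes_out (from the user) and bytes_in (to the user).
# Field names, domains, sample values, and the 1 MB threshold are illustrative.
from collections import defaultdict

GEN_AI_DOMAINS = {"chat.openai.com", "bard.google.com", "copilot.microsoft.com", "tome.app"}
UPLOAD_THRESHOLD_BYTES = 1_000_000  # outbound volume above this suggests file uploads, not prompts

flows = [
    {"user": "alice", "domain": "chat.openai.com", "bytes_out": 220,       "bytes_in": 48_000},
    {"user": "bob",   "domain": "tome.app",         "bytes_out": 3_400_000, "bytes_in": 1_200_000},
    {"user": "carol", "domain": "news.example.com", "bytes_out": 150,       "bytes_in": 2_500_000},
]

# Aggregate traffic per user per Gen AI domain.
totals = defaultdict(lambda: {"bytes_out": 0, "bytes_in": 0})
for f in flows:
    if f["domain"] in GEN_AI_DOMAINS:
        totals[(f["user"], f["domain"])]["bytes_out"] += f["bytes_out"]
        totals[(f["user"], f["domain"])]["bytes_in"] += f["bytes_in"]

# Normal browsing is heavily inbound; large or outbound-heavy totals suggest
# pasted content or file uploads rather than short questions.
for (user, domain), t in totals.items():
    if t["bytes_out"] > UPLOAD_THRESHOLD_BYTES or t["bytes_out"] > t["bytes_in"]:
        print(f"REVIEW: {user} sent {t['bytes_out']:,} bytes to {domain} "
              f"(received {t['bytes_in']:,}) -- possible upload of company data")
    else:
        print(f"OK: {user} -> {domain}: prompt-sized traffic only ({t['bytes_out']:,} bytes out)")
```

In practice this logic would run continuously over proxy, firewall, or NDR telemetry rather than a hard-coded sample, and the flagged users are the ones to follow up with on acceptable use, as Jamie describes.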

Guidance Vs. Regulation 

In the face of these challenges, a huge majority (90%) of organizations globally, including 75% of tech companies, say they want the government involved in some way in the regulation of AI. And as Gen AI and LLMs continue to advance, we are seeing governments start to provide recommendations to the public. On October 30th, the Biden-Harris Administration issued an executive order that establishes new standards for AI safety, security, and privacy. Around the same time, 28 nations attending the AI Safety Summit agreed that there’s an urgent global need to manage the safe and responsible development and deployment of AI tools.

But while legislation can be useful, we should instead be looking to tech companies to self-regulate, says Jamie, in order to let these new technologies realize their full potential. 

“We don’t want too much regulation, and we don’t want heavy handed regulation,” Jamie says. “We don’t want too many limits placed on this technology as it’s blossoming, as it’s just starting to come into its own. This is a time for innovation, and it’s time for the capabilities of these solutions to develop and really start to bring benefits to society. 

“I think a lot of the fear around AI comes down to things like the Terminator movies, where people think of AI, and they think of Skynet launching nukes and blowing up the whole world and going to war with the human race. That’s not even remotely likely, with the kind of technologies we’re talking about today. For a start, these are technologies that you run on a box—they’re not plugged into other systems. Having AI make decisions about interacting with the outside world is probably not smart, but having it provide advisory services and support services to business—based on the fact that the AIs can process a lot more data a lot faster than we as human beings can—is very useful. 

“So yes, some regulation, at some point down the path is probably going to be wise. But I would hate to see too much regulation this early on in the game.”

Rather than regulation, says Jamie, organizations would benefit from governmental guidance similar to what’s already out there in terms of data privacy and protection.

“Regulation shouldn’t control [what types of data users can input into LLMs], because there are going to be times when businesses need to share types of data with models that the regulation might block. Just because I’ve said we shouldn’t share sensitive data with the models, that’s not going to be a blanket statement. There will be times when businesses make a determination themselves that they do want to share this kind of information with the models to see what they come back with.”

“We have GDPR and HIPAA and other regulations that tell you you’ve got to protect your customers’ information and your employees’ information […] There’s enough regulation regarding data leakage in general already. What needs to change is, this needs to start to take into account new vectors for data leakage, which Gen AI potentially is. And there needs to be guidance. So, I need to say, ‘Look, first thing you need to understand is that there is a risk of leaking data here—if you didn’t already know that, you need to know it. And therefore, you need to be monitoring this very carefully—in ways that you do with other vectors for leaking information—and add it to your training so that users know the risk as well.’”

The Future Of AI Holds Huge Potential—And More Risk

Looking towards the next few years, there’s a lot to be excited about when it comes to Gen AI, says Jamie. 

“I’ve subscribed to the paid model of ChatGPT and the new version is really cool because it has internet access,” Jamie says. “As an example—perhaps this is the vanity of a security professional—I went to the new version and I asked it about me. I said to it, ‘Tell me about Jamie Moles.’ And because it went to the internet, it told me all about myself and the work I do at ExtraHop, and a little bit about my history, which was really cool. 

“Imagine taking that ChatGPT model that has access to the massive data source that is the internet, and being able to use that locally on your machine and have it look at your data locally—look at the way you work, look at other sites that you use and things like that—and begin to understand you as a person, the work you do, and how you like to do things. It could become a very useful, very personal assistant. And having the ability to go out by itself and integrate with other LLMs and Gen AI tools to get the most out of them on your behalf—I think that’s going to be really exciting. 

“But I also think that’s going to be something that means we’re going to have to really ramp up the monitoring and the auditing. It may even get to the stage where we have an AI monitoring and auditing other AIs to make sure that they’re not doing things that are risky. And then it’s going to be an interesting world.”


About Expert Insights

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions. You can find all of our podcasts here.