Interview: Stephan Jou On The Ethics Of AI—Uncovering Security And Privacy Challenges

Stephan Jou, CTO of Security Analytics at OpenText Cybersecurity, discusses the ethical and security risks associated with AI.

Stephan Jou

Top cybersecurity providers have been utilizing AI and ML for years to improve threat detection and remediation rates, particularly when it comes to unknown and zero-day attacks. But over the past year, AI has become more accessible, with Microsoft’s investment in OpenAI and the launch of ChatGPT’s conversational interface catalyzing the rushed release of thousands of other consumer-facing AI-powered apps.

But while these new tools enable us to access information faster than ever before, they’re also surrounded by growing security and ethics concerns. What’s to stop an AI text generator from ingesting and sharing confidential personal data? How can we be sure that the information being generated is accurate and unbiased? And how might manipulative threat actors start utilizing some of these readily accessible tools to carry out more efficient—and more successful—campaigns?

“A lot of the innovations have not come from the big companies that are producing these large language models—they’re actually coming from the open source and research communities that are making these models accessible to everyone. And there’s literally no difference between the tool that is used for good versus the tool that’s used for evil,” says Stephan Jou, CTO of Security Analytics at OpenText Cybersecurity.

Stephan has over 30 years of experience in artificial intelligence and machine learning, the last decade of which he’s spent focusing on applications of AI within cybersecurity. In his current role at OpenText Cybersecurity, Stephan leads efforts to apply AI and analytical methods to solve the most pressing cybersecurity problems that businesses are facing. 

In an exclusive interview with Expert Insights ahead of Black Hat 2023, Stephan discusses the ethical risks and security implications of AI, how cybercriminals may utilize AI and ML to launch more efficient and effective cyberattacks, and what the industry needs to do to ensure the ethical behavior—and responsible use—of AI globally.

You can listen to our full conversation with Stephan on the Expert Insights Podcast.

The Benefits Of AI: Fast, Effective Threat Detection 

Many cybersecurity providers have been utilizing AI and ML in their solutions for years now to help them detect and respond to threats—particularly insider threats. An insider threat is a cybersecurity risk that originates from within the organization. It usually involves an employee, ex-employee, or contractor who has legitimate login credentials and is using those credentials to cause damage to company data, systems, or services. It can also include external actors who have compromised company credentials and are using those accounts to do harm.

Traditional access-based controls struggle to detect insider threats because the malicious actors are authorized users, and they’re accessing data that they have the authority to access. AI, however, can help identify insider threats by analyzing anomalous behaviors and events across the organization, based upon a baseline of “normal” activity. 

“AI is really good when you don’t need to constrain it with a hard-coded set of rules,” says Stephan. “I’ve seen some stunning examples of human creativity where someone wanted to steal source code from [a technology company]. And instead of taking the source code and copying it to a USB key, for example, they scrolled through all the source code files screen by screen, they took screenshots of the source code, and then they mailed the screenshots to three separate Gmail accounts. 

“They did that to try and sneak around any binary, rule-based system, but the AI that we had built into a product called ArcSight Intelligence at the time, was able to see it because it was basically an unusual sequence of events that happened at an unusual time, with strong connections to—in this case—data exfiltration.”
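
This style of detection can be pictured with a small sketch. The Python snippet below is purely illustrative and is not how ArcSight Intelligence is implemented: it builds a per-user baseline of event types and working hours, then scores new activity by how far it deviates from that baseline, so that a rare sequence of events at an unusual time stands out even when every individual action is authorized. All event types, hours, and weights are hypothetical.

```python
from collections import Counter

# Hypothetical per-user history of (hour_of_day, event_type) tuples built from past activity.
history = {
    "dev_user": [(9, "file_read"), (10, "file_read"), (11, "file_read"),
                 (14, "email_send"), (15, "code_commit")],
}

def build_baseline(events):
    """Summarize 'normal' behavior: how often each event type occurs, and the usual working hours."""
    event_counts = Counter(event for _, event in events)
    usual_hours = {hour for hour, _ in events}
    return event_counts, usual_hours

def score_event(baseline, hour, event_type):
    """Higher scores mean the event deviates further from this user's own baseline."""
    event_counts, usual_hours = baseline
    total = sum(event_counts.values()) or 1
    rarity = 1.0 - event_counts.get(event_type, 0) / total  # event types never seen before score 1.0
    off_hours = 0.5 if hour not in usual_hours else 0.0     # bump for activity at an unusual time of day
    return rarity + off_hours

def score_sequence(baseline, events):
    """Score an entire sequence of events as the sum of its unusual parts."""
    return sum(score_event(baseline, hour, event) for hour, event in events)

baseline = build_baseline(history["dev_user"])
# Routine daytime activity scores low; a 2 a.m. burst of screenshots mailed out scores high.
print(score_sequence(baseline, [(10, "file_read")]))                         # ~0.4
print(score_sequence(baseline, [(2, "screenshot"), (2, "screenshot"),
                                (2, "email_send")]))                         # ~4.3
```

In a real product the baseline would be learned statistically over far richer features, such as peer groups, data volumes, and event sequences, but the core idea is the same: score deviation from learned normality rather than match a hard-coded set of rules.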

AI Is An Extension Of The SOC

While it’s clear that AI brings a lot of benefits to the cybersecurity industry, some worry that it may eventually replace security professionals—threat-hunting SOC teams, in particular—altogether. Stephan, however, argues that AI should extend a SOC team’s capabilities by automating a lot of the time-consuming, tedious tasks that many teams don’t have the resources to undertake manually, such as processing alerts and combing through large volumes of data to find specific indicators of compromise.

“I’m fond of saying that math is magical, but it’s not magic; we’re not doing anything that a human—when sufficiently skilled and given enough time—couldn’t do themselves. But when you talk to people that are running security operations teams, they often talk about having not enough staff, not having enough hours in the day. And they’ll talk about this human capital crisis where there’s so much data coming in, there’s such a high volume of alerts that they need to process, that they simply do not have enough time to manage all of the alerts that they need to process, or to comb through all the data. 

“That’s where math really helps. Math doesn’t need sleep, it doesn’t need coffee—it can be running 24/7, looking through all the data that you have, combing through the noise, and finding all those little subtle clues that you need to stitch together to be able to find the true threat that’s attacking your organization.”

So, rather than having to choose between human-centric and AI-powered threat hunting, we should be focusing on the “human-machine partnership,” says Stephan. 

“Chess used to be a hallmark for AI: if you could build an AI to play chess effectively, then you’d have won AI and equaled human intelligence. Of course, we’ve learned that that’s not actually an effective marker for human intelligence—everyone accepts that a human playing chess is not nearly as good as a program playing chess. But what’s interesting about that story is that the best chess in the world is something called ‘Centaur chess’, which is actually teams of human grandmasters playing chess with all the AI chess tools, side by side. And that’s because these AI tools are essentially very good at tactics and not making mistakes, whereas humans are really good at creativity and imagination. And it’s that combination—that partnership—where we really shine.”
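
To make the automation point concrete, here is a deliberately simple, hypothetical sketch of one of the tedious chores Stephan describes: combing log lines for known indicators of compromise (IOCs). The IOC values and log format are invented, and a real analytics product would layer statistical ranking and correlation on top of this kind of matching rather than stop here.

```python
# Hypothetical IOC lists; a real SOC would pull these from its threat-intelligence feeds.
IOC_DOMAINS = {"c2.badhost.example", "payload.malicious.example"}
IOC_HASHES = {"0123456789abcdef0123456789abcdef"}  # placeholder MD5-style value

def scan_line(line):
    """Return every known IOC that appears in a single log line."""
    lowered = line.lower()
    hits = [d for d in IOC_DOMAINS if d in lowered]
    hits += [h for h in IOC_HASHES if h in lowered]
    return hits

def triage(log_lines):
    """Comb through every line and surface only the ones worth an analyst's time."""
    flagged = []
    for lineno, line in enumerate(log_lines, start=1):
        hits = scan_line(line)
        if hits:
            flagged.append((lineno, line, hits))
    return flagged

logs = [
    "GET https://intranet.example.com/report 200",
    "DNS query for c2.badhost.example from host-042",
    "quarantined file md5=0123456789ABCDEF0123456789ABCDEF",
]
for lineno, line, hits in triage(logs):
    print(f"alert: line {lineno} matched {hits}")
```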

The Ethics Of AI: The Challenges And The Solution 

While many cybersecurity companies have been using AI for a while now, it really hit the headlines this year as leading AI developers released open APIs, accelerating the release of generative AI tools and enabling forward-looking organizations to start training their own proprietary AI models on confidential internal data. However, some experts are concerned that these tools are being released too quickly, without enough regard for the ethics surrounding them.

According to Stephan, there are two main ethical concerns when it comes to generative AI, the first of which is around the data that the machine is trained on and presents to its users. 

“Data issues are typically related to things like privacy and informed consent—do we have legitimate legal access to that data? […] and around access control—do you have the authority to access that data?” Stephan explains. “For example, if you have HR data that is only allowed to be seen by people on the HR team, and you have a model trained on that data, how can you control who has visibility into any of the predictions related to that data?” 
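
One way to picture the access-control concern Stephan raises is a gate between the model and its users: before a prediction derived from restricted training data is returned, the system checks that the requester is entitled to see that data. The roles, data sources, and policy below are entirely hypothetical; a real deployment would integrate with the organization’s identity provider and data-governance catalogue rather than a hard-coded dictionary.

```python
# Hypothetical entitlements: which training-data sources each role may see.
USER_ENTITLEMENTS = {
    "hr_analyst": {"hr_records", "public_docs"},
    "engineer": {"public_docs"},
}

def model_predict(query):
    # Placeholder for the actual model call; the gating logic is the point of this sketch.
    return f"(model answer to: {query!r})"

def answer_query(user_role, query, model_sources):
    """Refuse to surface predictions from a model trained on data the requester cannot see."""
    allowed = USER_ENTITLEMENTS.get(user_role, set())
    if not set(model_sources) <= allowed:
        return "Access denied: this model was trained on data outside your entitlements."
    return model_predict(query)

# An engineer cannot query a model trained on HR records; an HR analyst can.
print(answer_query("engineer", "average salary by team?", {"hr_records", "public_docs"}))
print(answer_query("hr_analyst", "average salary by team?", {"hr_records", "public_docs"}))
```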

The second issue, Stephan says, concerns the transparency and reliability of the AI model itself. 

“Everyone talks about the algorithm itself,” he explains. “So, they will talk about issues like hallucinations, in the case of large language models. But beyond the algorithm and the technical issues, there’s much more focus on the ethics—on the transparency of the model, the explainability of the model, and bias and drift. 

“Those are real issues that are less technical, much more societal, but they’re increasingly important because they’re much more tied to a growing set of compliance requirements from the various countries.”

Solving these ethical challenges requires a multi-pronged approach, with input not only from the organizations building AI technologies, but also from the organizations using them and from governments. 

“The organizations that are building these AI technologies are actually doing a great job,” Stephan says. “They have AI safety teams, they’re investing a lot in putting the right guardrails in place so that their technologies cannot be used for malicious operations. They’re also proposing guidance for countries to take into account, and there will almost certainly need to be some sort of government involvement. 

“Unfortunately, just like how there’s a need for governments to step in to prevent counterfeit money from being printed, they will likely need to do the same thing in this area as well. Being able to clearly identify anything that was generated by AI, being able to have very harsh and commensurate penalties associated with the nefarious use of AI technologies—that needs to be part of the solution. It’s not the only solution but it needs to be a part of it. 

“What organizations can do for their employees, unfortunately, comes down to a lot of education. And I say ‘unfortunately’ because that is something that is not new. […] We need to make sure that people are aware of what makes sense, what doesn’t make sense, what these technologies can do and what they can’t, and how they can be used responsibly.”

AI In The Hands Of The Adversary

As well as ethical issues, there are also some big security questions being raised on the topic of AI—namely, what happens when the adversary gets their hands on these tools? A lot of large language models and generative AI tools are developed by open-source communities that make the models accessible to everyone—which means they’re also available to cybercriminals. 

“There’s literally no difference between the tool that is used for good versus the tool that’s used for evil,” says Stephan. “So, we can’t really change the technology to make it only benefit one party versus the other.”

Some experts predict that we won’t necessarily see new types of attack as cybercriminals begin utilizing generative AI tools, but that cybercriminals will use AI to make their existing methods—such as writing a phishing email or ransomware code—more efficient. 

“Most attacks are financially motivated,” says Stephan. “They’re trying to earn more profit. […] If you go back to my earlier thesis that AI is all about automation, basically, this automation will allow cybercriminals to be more effective in their attacks at a lower cost. By increasing efficiency and ease and decreasing costs, all of a sudden, you have much more to gain from a profit perspective than before. 

“Threat actors are absolutely already using these tools to write better phishing emails, to scale it out, to do mass, very targeted spear phishing in a way that doesn’t require humans to do every step in that very manual process. 

“It’s the same with ransomware—using AI to basically make ransomware more effective and sticky, and easier to be injected into an environment is absolutely worth focusing on. It’s just economics.”

AI To The Rescue

As Stephan explained, there is good news; there is literally no difference between the AI technologies used for evil versus those used for good. “The same sophisticated and powerful methods used by adversaries to attack, are also being used by the good guys to detect and defend. With the right awareness of ethics and responsible use, AI can become an important and powerful weapon for us in our human-machine partnership and in our quest to catch more bad guys.”


About Expert Insights

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions. You can find all of our podcasts here.