Jennifer Kady On How Security Can Make Or Break The Pace Of GenAI Adoption
Jennifer Kady, Vice President of Security Sales at IBM Security, discusses the main GenAI concerns that security leaders should be aware of, and how developers can address them.
Generative AI applications can deliver significant returns on investment in terms of time and resources, particularly when it comes to writing code and developing other apps. However, the collaborative nature of developing GenAI models, which often relies on open-source contributions and crowdsourcing, can introduce vulnerabilities such as backdoors or the potential to inject malicious code into the model. As such, it’s crucial for the developers of LLMs and GenAI tools to implement robust security measures to address vulnerabilities and biases in their models. Only by putting these controls in place can they foster the trust necessary to fully realize the potential of GenAI tools and accelerate their widespread adoption.
“My mother can use GenAI, and my 13-year-old can use GenAI for developing new Python code—that’s phenomenal!” says Jennifer Kady, Vice President of Security Sales at IBM Security. “So, it’s not going to be going away. But how we use it and what the guardrails are in terms of trust, that’s the next bastion that we need to be considering with GenAI.”
IBM Security provides intelligent cybersecurity solutions and services that enable organizations to align, more easily manage, and modernize their cybersecurity strategies and infrastructure.
In an exclusive interview with Expert Insights at the 2024 RSA Conference in San Francisco, Kady discusses the top business drivers surrounding AI projects, the main GenAI concerns that security leaders should be aware of, and where the responsibility lies when it comes to securing GenAI.
Note: This interview has been edited for clarity.
Could you please introduce yourself and tell us a bit about your security background, and your current role at IBM Security?
My name is Jennifer Kady. I’ve been with IBM for almost 25 years, and have held a number of different roles. Most recently, prior to this one, I was in our data team, so I was part of the inception of our watsonx programme. I then came over to security, and I now lead our sales organization for the Americas—everything from Canada through to Argentina is in my purview. I also support our software and expert labs services.
My goal is to work with our clients who are looking to secure their AI portfolio and platforms, and also work with our other brands on what we’re doing as a total collective within our company.
Generative AI is a hot topic for IBM Security here at this year’s RSAC, and in particular how security can make or break the pace of GenAI adoption. Before we jump into that, what are some of the top business drivers surrounding AI projects, and why have GenAI tools become so popular amongst organizations in the past 18 months?
Time is a non-renewable resource; I can’t get it back. So, the quicker that I can get something to market, the better off I am in terms of not just productivity, but also revenue and, ultimately, my brand in and of itself. GenAI levels so many different playing fields; it allows a number of different types of developers to have access to controls that they might not have had in the past. It also brings a ubiquity to what I might be able to do as a company in terms of projecting a new application, how I want to go to market, or what I want to be doing in terms of new studies or research that I’m performing. It levels the playing field, it gives more people more access, and it gives a heck of a lot better return on investment when it comes to time.
But there are a lot of safeguards that need to be put in place there too, because, more often than not, you’re talking about open-source development and crowdsourcing potentially taking place as far as models being created. So, while there’s an excellent flow when it comes to leveraging and using GenAI, there are also controls that need to be considered, and there’s some newness to it. And security isn’t always at the forefront when teams are developing a new application. Because of that speed to market, security’s not always the priority.
But my mother can use GenAI, and my 13-year-old can use GenAI for developing new Python code—that’s phenomenal! We’re talking about multiple generations of individuals who are able to do more than we were able to do two years ago. That’s pretty exceptional in terms of new technology, so it’s not going to be going away. But how we use it and what the guardrails are in terms of trust, that’s the next bastion that we need to be considering with GenAI.
It’s clear that there are many benefits to utilizing GenAI but, as with any relatively new technology, it comes with a risk. What are some of the main concerns surrounding generative AI that security leaders should be aware of?
First and foremost, only about 24% of GenAI applications are being developed with security in mind, either at the forefront or at least prioritized. So, the majority have more of an attitude of getting production going and getting the application to market.
When you’re designing models, the more data that you have, the better the learning is. So, when we’re putting these models together, we’re putting a huge corpus of data—which could be PII or other “crown jewel” content—into a model that could potentially get tapped by an insider threat or a threat actor who’s using credential capability to get in somehow. So suddenly, I have this corpus of data that’s available to just about anyone who’s using that model. There needs to be some level of safeguarding around that.
To that end, we need more of a risk awareness culture in place. How do you manage and handle that AI pipeline? I would argue that security leaders need not only to be very conscious of that, but also to have it at the forefront of their development processes.
And going back to trust: if something gets poisoned or injected into that model, and you think you’ve got the controls in place to remediate those issues, people will trust that. If I am counting on the fact that there’s no bias in this model and I believe that I can count on whatever response I’m being given, then I’m going to make decisions in my business accordingly. But if you don’t have those guardrails in place and I can’t count on the fact that there’s a trust factor within that model, it dilutes the whole purpose of why I’m using the model in the first place.
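To make the data-safeguarding point Kady makes above more concrete, here is a minimal sketch, in Python, of screening documents for obvious PII before they are admitted to a training corpus. The patterns and function names are purely illustrative; a production pipeline would rely on a dedicated detection and classification service rather than hand-written regexes.

```python
import re

# Illustrative patterns for a few common PII types; a real pipeline would use
# a dedicated detection library and cover far more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> tuple[str, dict]:
    """Replace matched PII with placeholders and report what was found."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = len(matches)
            text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text, findings

def build_training_corpus(documents: list[str]) -> list[str]:
    """Only admit documents into the corpus after a redaction pass."""
    corpus = []
    for doc in documents:
        clean_doc, findings = redact_pii(doc)
        if findings:
            # In practice this would feed an audit log, not just stdout.
            print(f"Redacted before ingestion: {findings}")
        corpus.append(clean_doc)
    return corpus

if __name__ == "__main__":
    docs = ["Contact jane.doe@example.com about card 4111 1111 1111 1111."]
    print(build_training_corpus(docs))
```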
You mentioned earlier the open-source nature of a lot of these tools. Where does the responsibility lie when it comes to securing them?
I believe that it is incumbent upon the application developer—the owners, those that are creating it—and the security protocols that they have in place.
When you’re starting off developing your application and you decide that you’re going to use open-source code, you need to consider the fact that everyone has access to that same code and that backdoors can be put in place. There are ways that you can handle and manage the access and entry, and who’s able to leverage that model, but you do have to take the time and potentially make the investment to put those programmes or applications in place.
The first step is the data: ensuring that you understand what you’re putting into the model and why you’re putting it into the model, and then putting that protocol in place.
Then you need to be looking at access. What are you doing in terms of discovering who’s involved, what they have access to, and when you should stop or safeguard that access? And then in terms of the controls, you need to think about how you can actually stop a threat actor or malicious user from injecting something into the model.
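As a rough illustration of the access and injection controls Kady describes, the sketch below gates a request before it ever reaches a model: it checks whether the caller’s role is authorized, then rejects prompts containing obvious injection phrasing. The role list and the patterns are hypothetical, and real guardrail products rely on trained classifiers rather than keyword lists.

```python
import re

# Hypothetical role table: which callers may query the model at all.
AUTHORIZED_ROLES = {"analyst", "developer", "admin"}

# Simple, illustrative indicators of prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
    re.compile(r"disable (the )?safety", re.IGNORECASE),
]

def gate_request(user_role: str, prompt: str) -> str:
    """Refuse the request unless the caller is authorized and the prompt looks clean."""
    if user_role not in AUTHORIZED_ROLES:
        raise PermissionError(f"Role '{user_role}' is not allowed to query the model.")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected: possible injection attempt detected.")
    return prompt  # safe to forward to the model

if __name__ == "__main__":
    print(gate_request("analyst", "Summarize last quarter's incident reports."))
```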
With that in mind, can you tell us about IBM’s risk-based framework for securing generative AI, and how it can help increase organizations’ trust in their AI tools?
You identify your data, first and foremost, so that you know what you need in terms of making those decisions on what is being put into that model.
The next step is protecting the model in and of itself. So, continuously scanning and ensuring that you know how and what data people have access to, but also the guardrails around that particular model itself. Scanning on a regular basis will help you in terms of knowing how it’s being used, what’s being used, and then whether you need to do some sort of recalibration of the model.
And then finally, it gets into the usage. The users should be able to trust the model they’re working with and understand why they’re using it and what it’s all about. When we’re looking at inference and we’re looking at poisoning of models, the question as to who has access and why they’re using these models needs to be clearly defined when developing this programme from the get-go. And oftentimes, we set up these applications and no one really comes back and takes a look at how the data is being used, why it’s being used, and whether it’s doing what we expected or giving the expected responses.
So, we’re protecting the data, protecting the model, and then protecting the usage and the users through open communication and continuously scanning and retrofitting if necessary.
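Kady’s “scanning on a regular basis” step also lends itself to a short sketch: periodically replay a fixed set of reference prompts through the model and flag answers that have drifted from an approved baseline, as a signal that recalibration may be needed. This is illustrative only; the model is represented by any callable, and the similarity threshold is an assumption.

```python
from difflib import SequenceMatcher
from typing import Callable

def scan_model(model: Callable[[str], str],
               baseline: dict[str, str],
               threshold: float = 0.8) -> list[str]:
    """Return the reference prompts whose answers have drifted from approved responses."""
    drifted = []
    for prompt, approved_answer in baseline.items():
        current_answer = model(prompt)
        similarity = SequenceMatcher(None, approved_answer, current_answer).ratio()
        if similarity < threshold:
            drifted.append(prompt)
    return drifted

if __name__ == "__main__":
    reference = {"What is our refund window?": "Refunds are accepted within 30 days."}
    stub_model = lambda prompt: "Refunds are accepted within 30 days."  # stand-in for a real model call
    print(scan_model(stub_model, reference))  # an empty list means no drift detected
```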
What are your final words of advice to organizations concerned about embracing the use of GenAI in the workplace?
In security, we shouldn’t be the bad guys. Everyone should understand what we’re doing and why we’re doing it, so that security teams don’t have to be the bad guys coming down on the enforcement end.
If you’ve developed the right culture in your organization, it should just flow with ready steam. So, I would suggest focusing on that risk culture and ensuring that you’ve got stakeholders across the different parts of the business who aren’t necessarily steeped in security, but know why you’re using this and how you’re using it. That can make a huge difference in changing that culture.
My second suggestion is to remember that, in terms of security for AI, the threat actors have just as much access to what we’re developing as we do, and understanding how they’re leveraging that same technology is critical. So, as much as you need that internal focus in terms of how you’re developing, we also need to continuously improve our ability to detect when they’re inside the infrastructure.
Finally, what are you most excited about in the cybersecurity space as we move further into 2024, and then beyond into 2025?
Security in and of itself is intertwined in everything that we’re doing—or at least it should be. Everything should be considered secure by design. That’s how we’re developing our solutions and software as we’re going forward. The acquisitions that you’re seeing are all based on a secure-by-design structure, and on providing clarity and trust as to how and why our data is being produced the way it is, so that you can make better decisions in terms of what you want to invest in.
That is becoming more prevalent in the industry, and it’s something that all of my brethren in the other technology companies are working toward.
And it’s going to be particularly important as we get into the quantum era, because that will continue to amplify the type of data that we have access to.
Thank you to Jennifer Kady for taking part in this interview. You can find out more about IBM Security’s AI-driven cybersecurity solutions via their website.
Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions with confidence.
For more interviews with industry experts, visit our podcast page here.