AI agents are multiplying inside the enterprise at a pace security teams can’t match. Every new workflow, integration, copiloting tool, or automated assistant quietly adds another semi-autonomous actor with access to corporate data—and often without an attached identity, policy framework, or monitoring trail.
Live at OpenText World 2025 in Nashville, Expert Insights spoke with Scott Richards, Senior Vice President of Product and Engineering for OpenText’s Cybersecurity Enterprise Group. Richards oversees teams spanning security analytics, application security, data security, identity and access management, and digital forensics, giving him a broad vantage point on how AI is reshaping today’s threat landscape.
In this conversation, he breaks down why AI agents are proliferating faster than enterprises can secure them; how identity, data protection, and behavioral analytics must evolve to keep pace; and why “preemptive security” is becoming essential as autonomous systems introduce entirely new vectors for exploitation.
Could you start by introducing yourself, your role at OpenText, and how your work intersects with emerging AI-driven engineering practices?
My name is Scott Richards and I’m the Senior Vice President of Product and Engineering for our Cyber Security Enterprise Group. We also have a Cyber Security SMB and Consumer Team, but I run the Enterprise Team.
We divide our enterprise cybersecurity group into five pillars: Security analytics, application security, data security, identity and access management, and digital forensics and incident response.
For the second part of your question — I think it doesn’t just intersect with it; it’s almost a complete and total overlap with it. We are extremely focused (as is the industry at large) on not just securing against typical human users or insider threats, but also securing against agents.
In some reports, agents are already outnumbering humans by 50:1, so you can imagine a CISO at an organization who’s stressed about identity management and data access for human identities, now having to multiply that by 50 and figure out what AI agents are going to do.
We are right there at that intersection. Everyone we talk to is concerned about their data, how agents are accessing that data, and how we are going to attach identities to those agents so that we can apply policies and manage access, etc.
You delivered a keynote yesterday all about AI agents becoming the new attack surface for businesses. For our readers, what were the key insights or takeaways that you covered?
Agents are proliferating and there’s nothing we can do about it. We could potentially just say we’re going to outlaw or ban agents from our organization, but as I mentioned in my keynote, that would be a little bit like saying, “Hey, flying has become riskier, so we’re just going to ground airplanes and we’re never going to fly again.” Clearly, that’s not the solution. You would be at a competitive disadvantage immediately if you chose to do that, because these agents are extremely helpful for doing a lot of different things.
The question is, how do we make sure that we are safe with these agents? And there are several steps to take. What’s critical is that with these agents, we have to really focus on pre-emptive security.
The world we used to live in, where we could simply watch for known attack patterns and alert someone when a pattern was recognized, is gone. That is still a valid security response, but now we must do more, specifically at the front end of the attack sequence, so these attacks don't happen.
We need to use very strong AI-powered application security solutions to look through the code and find vulnerabilities in these agents—either in the agents themselves or in the applications they develop—to make sure we’re filling gaps where things like prompt injection can be inserted and where excessive agency can be exploited. That’s the first step in making sure those agents are secure.
Then we need to make sure we are locking down our data. The data is what these bad actors want; they either want to block our access to it or gain access for themselves and exploit it. It’s important to make sure that you have a solution that goes through the entire ecosystem and identifies PII, PHI, and any other sensitive information—not only to take measures to protect it through encryption, etc., but also to report it into other security solutions.
Because this is a team sport, right? There is no single silver bullet that’s going to solve this problem. We have to make sure that our data security solution is reporting to our threat detection and response solution when there is sensitive information in a file or repository, so that the solution can pay closer attention to it.
We do some interesting things in threat detection where we monitor human behavior to figure out the “unique normal.” The unique normal differs by person, and for me it’s that I work from Utah and usually work between 6 a.m. and 7 p.m. So, if all of a sudden I’m doing something from Africa at 2 a.m., there’s probably something wrong there.
And we can do the same thing for agents by watching and baselining their timestamps—when they work, what they access, how often they access certain files. When that behavior becomes atypical, we can immediately alert and notify before damage is done. That then feeds into identity and access management, where we can lock down their access to certain folders, files, etc.
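As a rough illustration of that baselining idea, here is a minimal sketch in Python. It is not OpenText's implementation; the class name, the hour-of-day and file-path features, and the training data are all invented for the example. A real system would correlate many more signals at machine scale.

```python
from datetime import datetime

class AgentBaseline:
    """Toy behavioral baseline for one agent: learns its typical
    active hours and the set of files it normally touches."""

    def __init__(self):
        self.active_hours = set()
        self.known_files = set()

    def learn(self, timestamp: datetime, path: str):
        # Record observed behavior during a trusted training window.
        self.active_hours.add(timestamp.hour)
        self.known_files.add(path)

    def is_anomalous(self, timestamp: datetime, path: str) -> bool:
        # Flag activity outside learned hours, or touching unfamiliar files.
        return (timestamp.hour not in self.active_hours
                or path not in self.known_files)

baseline = AgentBaseline()
# Training window: the agent works 06:00-19:00 and touches two repositories.
for hour in range(6, 20):
    baseline.learn(datetime(2025, 11, 3, hour), "/repo/reports")
    baseline.learn(datetime(2025, 11, 3, hour), "/repo/configs")

print(baseline.is_anomalous(datetime(2025, 11, 4, 2), "/repo/reports"))   # 2 a.m. → True
print(baseline.is_anomalous(datetime(2025, 11, 4, 10), "/etc/secrets"))   # new file → True
print(baseline.is_anomalous(datetime(2025, 11, 4, 10), "/repo/reports"))  # normal → False
```

In practice the anomaly signal would feed identity and access management, which could then restrict the agent's access rather than just raise an alert.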
This is a continuous loop of dynamic security that evolves, and two things are important there. One, it must evolve and change because threats are dynamic. And two, it must be AI-powered. There is just no world anymore where humans can keep up with the advanced threats being thrown at us without doing it at machine scale. We’ve got to be able to sort through billions of events, correlate them, identify what represents a risk, and respond appropriately.
When you look across the enterprise, what specific AI-driven risks should organizations be most concerned about right now—is it insecure code generation, data exposure, workflow hijacking, or something else?
Yeah, those three are great, and I would identify a couple of others as well.
The world of agents and agent proliferation is related to those you just listed. But we're in a unique new position where we have to decide how we're going to manage these agents, because it's the Wild West right now. People are building their own agents.
We have announced our Aviator Studio, which allows people to create Aviators, which are agents. We're doing it in a safe, secure, organized, orchestrated way, which is what we need to do.
But when you think about agents, you’ve got the digital assistant which does your bidding and is relatively easy to control, then you’ve got the digital worker which is a separate entity all by itself. It’s autonomous. It’s doing what it wants. We can assign it an identity and make sure that it stays inside the scope we intended.
Where it gets scary is when you talk about digital twins, right? That's my identity, but it's working autonomously, hopefully doing what I want it to do, but there's no guarantee it won't go out of scope or fail to interpret what I want it to do.
We’ve got to, as an industry, figure out how we are going to identify these. How are we going to assign identities? How are we going to manage their access? How are we going to monitor what they’ve touched?
And we’re well positioned at OpenText to do that. We’ve been doing identity and access management for two decades, but we have to evolve that to also now monitor agent behavior, which is really interesting and exciting.
From an engineering leadership perspective, how do you balance enabling AI innovation with enforcing these strict security controls?
That's an issue that every industry, every engineering organization, is battling with today. You can't be last in this game, or your engineering teams will get frustrated and leave, because they want to go to more innovative, cutting-edge organizations that allow them to use this technology.
We are a massive organization with thousands and thousands of developers, so we need to have some controls in place and some guidance on how that technology is used. Our developers are using our own tools for AI-powered development initiatives, for instance.
That’s deployed across all of OpenText. We use our Aviators to look through code and find gaps that might allow for things like prompt injection.
We also use our Aviators to actually provide automatic code suggestions. So, our developers can literally just click a button and replace the vulnerable code. That’s a massive time saver.
We’re also leveraging tools from partners, including [Microsoft] Copilot and other development tools. We’re deploying and becoming more active every day, but we’re taking a measured, organized approach, as opposed to just letting everyone go rogue and do whatever they want. That’s not a comfortable development environment for anyone.
Looking ahead to 2026, what advice would you give to security leaders on risks to prioritize in terms of AI security, or beyond?
It’s about the data, so my number one piece of advice is to make sure your data is secure and ready for this proliferation of agents. Because it’s coming. If you don’t think it is, then it’s probably happening behind your back. There are going to be agents in your environment.
You need to make sure that every corner of your content and your data repositories is secure. That means scanning those environments, identifying where there is sensitive data, and taking action, whether that means redacting that data, masking it, relocating it, or encrypting it.
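To make the scan-then-act loop concrete, here is a deliberately simplified Python sketch. The regex patterns and the `[REDACTED]` masking are illustrative assumptions only; a production data-security product would use far richer detectors (checksums, context, classifiers) and offer relocation and encryption alongside masking.

```python
import re

# Hypothetical, simplified PII patterns for the example only.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict:
    """Report which sensitive-data categories appear in a document,
    so the finding can also be fed to threat detection and response."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.search(text)}

def mask(text: str) -> str:
    """Redact every match so downstream consumers never see raw PII."""
    for pat in PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text

doc = "Contact jane@example.com, SSN 123-45-6789."
print(scan(doc))  # {'ssn': ['123-45-6789'], 'email': ['jane@example.com']}
print(mask(doc))  # Contact [REDACTED], SSN [REDACTED].
```

The key design point mirrored from the interview is that detection and remediation are separate steps: the scan report can be shared with other security tools even when the chosen action is relocation or encryption rather than masking.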
Fortunately, we have solutions that do all those things. When a bad actor accesses the data, they’re either getting the false information that you want them to get, or they just don’t have access at all.
Number two, make sure that all agents have an identity, and that you're managing and monitoring them as you would any human. And number three, make sure that you are monitoring the behavior of those agents so that you know when they are going out of bounds.
We don’t need to be afraid of leveraging agents. We need to be organized, and we need to take the precautions necessary to deploy these agents safely.