AI Agents Are the Next Big Attack Surface, OpenText Warns

If your LLM is making its own decision or creating code independently it’s no longer just a tool - it’s a user.

Published on Nov 26, 2025
Mirren McDade Written by Mirren McDade
AI Agents: The New Attack Surface

This year’s OpenText World conference in Nashville brought together leading technology executives, IT professionals, and industry experts to explore the cutting-edge developments shaping enterprise software and digital transformation. The event featured in-depth sessions on AI-driven automation, data management strategies, cybersecurity challenges, and emerging trends that are redefining how businesses operate and innovate.

In a keynote led by Marcus Hearne, Senior Director of Product Marketing, and Scott Richard, SVP Product/Engineering, the speakers touched upon the emergence of AI Agents as a new attack surface.

What Is Agentic AI? Why Use It?

Agentic AI refers to artificial intelligence systems that can autonomously make decisions, plan, and take action to achieve goals with minimal human intervention. Instead of simply reacting to inputs or performing predefined tasks, agentic AI can establish objectives, create plans, and execute tasks with minimal oversight. As this technology evolves, it is poised to reshape multiple sectors by streamlining intricate operations and improving the efficiency of everyday workflows.

AI Agents are quickly becoming the “next big thing” according to Gartner, who also predict that 33% of enterprise software applications will include agentic AI by 2028, a significant jump from only 1% in 2024. They also anticipate that around 15% of day-to-day work decision making will be made autonomously by AI agents.

Gen AI is poised to increase productivity and provide real, tangible value to organizations that implement it. An MIT study shows this productivity boost sits at an average of 15% with access to an AI assistant.

So, the value is real, and we can observe it. Productivity can improve without the dreaded decrease in quality. So, what are the risks?

What Are the Risks Of Gen AI?

The rapid adoption of any tool can have potentially devastating consequences. A tool with the ability to act autonomously, change processes or change data comes with some obvious potential pitfalls. However, with the right precautions in place, safe deployment of these agents is possible.

So, what is the solution here? Clearly it is not to avoid deploying AI agents at all, as this is simply not an option for those looking to stay competitive in their respective markets. Organizations that do not embrace the expansion and adoption of AI risk falling behind or even leaving themselves unnecessarily exposed. Instead, controls must be put in place for safer deployment.

The 5 Steps To Secure AI Agents

By taking these five steps to ensure these agents can be securely deployed, organizations can avoid the potential pitfalls of this emerging technology.

First Step: Identify And Protect Sensitive Data

Organizations must create a secure environment and identify all sensitive information in their ecosystem. Bad actors target data either for theft or extortion, so continuous ecosystem scanning (using AI-powered tools to handle scale) is essential. Identifying and securing sensitive data also allows it to be fed into other systems to advance the security cycle.

Second Step: Secure Identities

Agents have identities too, and Non-Human Identities (NHIs) should be treated like human ones. Each must have a unique identity, be governed by clear policies, use zero-trust authentication, and maintain an audit trail of access. The goal is to give agents only what they need, when they need it, preventing excessive agency creep.

Third Step: Monitor Behavior

Agents’ actions should be continually monitored. Establishing a baseline of normal behavior makes it easier to spot atypical activity and indicators of compromise. This enables faster, more effective remediation. Integration with SIEM, SOAR, IAM, and other systems supports proactive and preemptive action.

Fourth Step: Secure Your Apps

Application security underpins agent security. Agentic applications must be designed to prevent new attack vectors, including prompt injection, excessive agency, and sensitive data leaks. AI-augmented security testing can detect these threats and enable secure coding at scale.

Fifth Step: Be Ready to Respond

Breaches are often a matter of when, not if. AI-driven attacks move faster than manual investigations, so automated forensics and root-cause analysis are critical for rapid recovery and resilience. Organizations must move from alerts to insight to action quickly, using a unified view that correlates all artifacts, logs, and traces into a single pane of glass for efficient response.

To Conclude

There is no silver bullet approach to securing AI agents, and anyone who tries to tell you they have one is being misleading. Without sufficient guardrails, any advanced technology has the potential to open up new avenues for risk and possible exploitation.

The true answer to this issue is to weave together a strong fabric of solutions that interconnect, are powered by AI, and can use machine scale and work to create a feedback loop that evolves as threats evolve.


Read More: