
“Agentic AI”: An Autonomous Future For Cybersecurity, Or Just Another Buzzword?

Last updated on May 8, 2025
Written by Joel Witts

Imagine a world where AI-powered security agents can execute tasks, hunt threats, and learn from attempted breaches, all without any human intervention. That’s the promise of so-called “agentic AI”, which is by far the biggest buzzword in the cybersecurity space today. 

It’s fair to say there’s a healthy dose of scepticism in the market as to whether this can be achieved, and what the timescales may look like. Is agentic AI really a transformative tool, or just more marketing jargon? Nicole Bucala, the CEO of DataBee, put it well to me recently: “AI is such a buzzword and… Agentic AI has now become the buzziest of the buzzwords.”

There is clearly a huge demand for more automation in the security space, and agentic AI can be transformative for security teams. This is not a hypothetical statement; there are already several tools on the market bringing automated threat hunting and triage capabilities.

There’s some debate as to whether these tools are really agentic AI (or, indeed, what the phrase agentic AI really means), but it’s clear that vendors are moving at speed to implement agentic capabilities within their solution portfolios. “AI will be embedded in every aspect of the investigation and the response lifecycle,” Chas Clawson, Field CTO at Sumo Logic, told Expert Insights at RSAC 2025.

In this report, my objective is to outline what agentic AI is, how it’s reshaping cybersecurity, and whether it can deliver real security value. Drawing on insights from industry leaders, we’ll unpack its potential and where the industry may be headed.

Defining Agentic AI And Its Context In Cybersecurity

Agentic AI refers to autonomous systems that perceive, reason, and act to achieve goals without constant human input. Built on large language models (LLMs), reinforcement learning, and behavioral analytics, agentic AI excels in dynamic environments. In cybersecurity, it operates across:

  • Threat detection: Identifying anomalies in networks, endpoints, and clouds.
  • Automated response: Blocking threats or quarantining devices.
  • Proactive hunting: Simulating attacker behavior to find vulnerabilities.
  • Secure app development: Bolstering SAST & DAST with vulnerability remediation and triage.

“From a security standpoint, we’re able to leverage LLMs and massive data to help improve the performance of cyber,” said Patrick Joyce, Global Resident CISO at Proofpoint, at RSAC 2025, emphasizing agentic AI’s ability to enhance visibility and context—threat intent, user roles, and compliance needs.
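To make this perceive-reason-act model more concrete, below is a minimal, hypothetical sketch of an agent loop in Python. Every name in it (SecurityEvent, reason_about_event, the stubbed response actions) is an illustrative placeholder rather than any vendor’s actual API, and the reasoning step simply stands in for where an LLM and behavioral analytics would sit in a real system.

```python
# Minimal, illustrative perceive-reason-act loop for a security agent.
# All names and thresholds here are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SecurityEvent:
    source: str        # e.g. "endpoint", "cloud", "network"
    description: str   # raw alert or telemetry summary
    severity: int      # 1 (low) to 5 (critical)

def reason_about_event(event: SecurityEvent) -> str:
    """Reasoning step: a real system would call an LLM plus behavioral
    analytics here; this stub just maps severity to a decision."""
    if event.severity >= 4:
        return "quarantine"
    if event.severity >= 2:
        return "investigate"
    return "log"

# The "act" step: map decisions to concrete (stubbed) response actions.
ACTIONS: dict[str, Callable[[SecurityEvent], None]] = {
    "quarantine": lambda e: print(f"[ACT] Quarantining asset from {e.source}: {e.description}"),
    "investigate": lambda e: print(f"[ACT] Opening investigation: {e.description}"),
    "log": lambda e: print(f"[ACT] Logging low-severity event: {e.description}"),
}

def agent_loop(events: list[SecurityEvent]) -> None:
    """Perceive each event, reason about it, then act on the decision."""
    for event in events:                      # perceive
        decision = reason_about_event(event)  # reason
        ACTIONS[decision](event)              # act

if __name__ == "__main__":
    agent_loop([
        SecurityEvent("endpoint", "Unsigned binary spawned PowerShell", 4),
        SecurityEvent("cloud", "Unusual login location for a service account", 3),
        SecurityEvent("network", "Port scan from a known research scanner", 1),
    ])
```

The loop itself is deliberately simple; in practice, the value of agentic AI lies in how much richer that reasoning step can be than a fixed severity threshold.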

What Can Agentic AI Do For Security Teams?

A robust agentic AI program includes:

  • Threat discovery, classification, and inventory: Real-time anomaly detection, behavioral analysis, and threat cataloging.
  • Threat protection: Automated responses, DLP, and encryption.
  • Security monitoring and response: SIEM integration, real-time alerts, and behavioral analytics.
  • Threat deletion, backup, and recovery: Secure threat neutralization, data restoration, and forensic analysis.

These capabilities help teams tackle data sprawl, insider threats, and compliance requirements. Vendors at RSAC 2025 stressed, however, that human oversight remains essential.

Agentic AI: The Next Era Of Cybersecurity?

Cybersecurity has evolved through distinct eras, each addressing new attack surfaces:

  • Perimeter security (1990s): Firewalls from Check Point and Juniper Networks secured enterprise boundaries.
  • Network security (2000s): Palo Alto Networks and Zscaler introduced advanced firewalls and secure gateways.
  • Endpoint security (2010s): CrowdStrike and Trend Micro tackled device-level threats.
  • Cloud security (2020s): Wiz and Orca Security emerged to protect cloud environments.
  • Agentic AI security (2025?): The next era may be taking shape, yet a billion-dollar agentic AI security platform has not emerged.

Following the layered security approach—a framework many CISOs use to assess solutions—agentic AI builds on these foundations to address today’s complex threats. With data sprawling across clouds, SaaS, and endpoints, and attackers leveraging AI themselves, agentic AI’s autonomous capabilities are the logical next step in improving defense workflows.

Is Agentic AI A New Cybersecurity Category?

The rise of agentic AI sparks debate about its place in the cybersecurity landscape: is it a distinct category like SIEM or XDR, an evolution of existing tools, or simply a means to better outcomes? Three perspectives emerge:

  • A new security layer: Some argue agentic AI constitutes a new category, adding a layer of autonomous threat management atop traditional tools. Its ability to reason, act, and adapt in real time—beyond the correlation-based limits of SIEM or XDR—sets it apart.
  • An evolution of security monitoring/SIEM: Others see agentic AI as an evolution of security monitoring, enhancing SIEM and XDR with advanced analytics and automation. Rather than a standalone category, it upgrades existing platforms by integrating LLMs and behavioral analytics.
  • Outcome-focused for users: For many practitioners, the categorization debate is secondary to outcomes. Security teams prioritize measurable improvements—faster response, fewer false positives, better compliance—over taxonomic labels. Users will care about results, not whether agentic AI is a new layer or an upgrade. 

The truth likely lies in a hybrid view: agentic AI introduces novel capabilities that could define a new layer, while building on SIEM and XDR foundations. Yet, for users, the priority is clear—deliver security outcomes, not debates over terminology. As the technology matures, its classification may become clearer, but its value hinges on practical impact.

“AI will be the most powerful technology of our lifetime,” said Jen Easterly, former CISA Director, at RSAC 2025, emphasizing its potential to “detect attacks before they occur” while warning that “AI that can protect, can attack.”

The Case For A Holistic Agentic AI Platform

Today’s cybersecurity landscape is a battleground of scale and speed. Data sprawl across hybrid clouds and thousands of SaaS apps creates blind spots. Sophisticated attacks, like autonomous malware, exploit these gaps faster than humans can respond. Security operations centers (SOCs) are overwhelmed, facing thousands of alerts daily. 

Agentic AI offers a solution: autonomous systems that reason, decide, and act in real time. By integrating threat detection, response, hunting, and scalability, a holistic agentic AI-powered platform can protect modern infrastructures where data is the new perimeter. 

“We’re starting to see the need for autonomous action more and more,” noted Nicole Carignan, SVP of Security & AI Strategy at Darktrace, at RSAC 2025, because “it’s not a human versus AI issue; it’s the need for AI to defend against things we don’t even know exist.” 

Several forces are pushing agentic AI to the forefront:

  • Rising breaches: The increasing cost of data breaches highlights the need for better security controls.
  • Cyber resilience: Over 50% of firms lose sensitive data each year, driving demand for rapid recovery.
  • Increasing adversary AI use: “The AI generated threat will be big,” warned Rachel Jin, CTO at Trend Micro, at RSAC 2025, noting attackers’ use of AI for targeted phishing and malware.
  • Data sprawl: Companies use hundreds of SaaS apps, creating untracked data flows that autonomous security agents can help plug.
  • Compliance pressures: GDPR, CCPA, and emerging laws impose hefty fines if data breaches occur. AI agents could help teams to reduce these pressures.

“The adversary is going to figure out new ways to use AI. And it’s going to help them scale their operations and it’s going to make them better. And I think that we are, whether we like it or not, now in an arms race with them,” John Hultquist, Chief Analyst at Google Threat Intelligence Group told Expert Insights. But, he adds, “AI could be the solution that we’ve been looking for.”

Surge In Agentic AI Investment And Innovation

Unsurprisingly, the cybersecurity industry is pouring investment into agentic AI with several large vendors releasing agentic AI security platforms, and many start-ups emerging. Below is a detailed breakdown of some key players in the space and their offerings:

Market Leaders:

  • Microsoft (Security Copilot): Unveiling new AI agents in March 2025, Microsoft’s Security Copilot automates tasks like phishing response and Windows settings modifications, integrating with Microsoft 365 and Azure. Its agents use unique identities for secure access control, promising to reduce manual workloads and false positives. 
  • Google Cloud Security (Agentic SOC): Google Cloud recently outlined their vision for an agentic SOC, powered by Google’s Gemini LLM. This involves autonomous agents for alert triage and malware analysis. Capabilities include reverse-engineering suspicious files, dynamic investigations and audit logs, and streamlined Tier 1 and Tier 2 analyst workflows.
  • CrowdStrike (Charlotte AI Agentic Response and Agentic Workflows): CrowdStrike introduced Charlotte AI Agentic Response and Workflows for CrowdStrike Falcon in April 2025, enabling autonomous investigation and response across endpoints, cloud, and identities. Charlotte AI Triage for Identity Protection aims to triage identity-based attacks and reduce analyst workloads.
  • Trend Micro (Cybertron): Trend Cybertron is a cybersecurity-focused LLM that continually learns from threat data. It’s fully integrated into Trend’s Vision One platform to provide proactive detection and response. The platform is designed to accelerate the development of autonomous security agents, and Trend Micro made the platform open source in March 2025.
  • IBM (Autonomous Threat Operations Machine – ATOM): Launched at RSAC 2025, IBM’s ATOM uses agentic AI for autonomous security operations and predictive threat intelligence, minimizing human intervention. Its X-Force Predictive Threat Intelligence agent leverages industry-specific AI models to predict threats and minimize manual threat hunting.

Cloud-Native Leaders:

  • SentinelOne (Purple AI): The Purple AI Athena release, debuted at RSAC 2025, mimics SOC analyst reasoning with auto-triage and hyperautomation, integrating third-party data sources. The platform is built on deep security reasoning, full-loop workflows with automation and response, and data-source agnostic integrations across the SOC. 
  • Darktrace: Known for its AI-driven threat detection, Darktrace’s agentic platform autonomously identifies and responds to anomalies across cloud, SaaS, and on-premises systems, without solely relying on generative AI technologies.
  • Checkmarx (AI-Powered Multi-Agent Platform): In early access currently, Checkmarx’s platform automates secure code development, using multiple AI agents to scan codebases and suggest fixes. It aims to reduce vulnerabilities in tested applications.
  • 1Password: Password manager provider 1Password recently announced new agentic AI capabilities that help to secure and govern identities and credentials. These can automatically apply access controls and provide visibility into non-human identities.
  • Cycode: Application security posture management (ASPM) provider Cycode has released ‘AI Teammates’ that can tap into risk graphs, monitor code changes and scan results, and even propose code fixes.

Emerging Startups:

  • Terra Security ($8M): Backed by $8M in funding, Terra Security aims to automate penetration testing with agentic AI, simulating attacker behavior to identify vulnerabilities. 
  • Kenzo Security ($4.5M): Kenzo Security’s Agentic Security Platform, funded with $4.5M, targets autonomous threat detection in hybrid environments. It aims to automatically detect threats and conduct security investigations.

Challenges And Blockers To Agentic AI Adoption

Despite this investment, adopting agentic AI in cybersecurity faces significant hurdles, from practical constraints to trust issues, which temper its transformative promise:

  • Cost of tools: Agentic AI tools and LLMs are expensive, primarily targeting enterprises with deep budgets. High licensing fees, infrastructure costs, and ongoing maintenance make them inaccessible for many small and mid-sized businesses today. 
  • Lack of data: Agentic AI requires robust, accurate datasets to function effectively, but CISOs lack a “magical data feed” to plug into these tools. Many organizations struggle with fragmented data across siloed systems—cloud, SaaS, and on-premises—making it difficult to train AI models. 
  • Vendor lock-in: Current agentic AI solutions are tightly integrated with specific platforms, creating issues with vendor lock-in. Few standalone tools exist that can map across diverse infrastructures, limiting flexibility.
  • Trust and compliance: Security teams remain wary of agentic AI, particularly in high-stakes environments with strict compliance requirements (e.g., GDPR, CCPA). Issues like AI hallucinations—where models generate inaccurate outputs—persist, undermining confidence. 

At a recent panel hosted by the Coalition for Secure AI (CoSAI), Jason Clinton, CISO at Anthropic, presented a solution to the issue of trust in the form of user-defined risk tolerances and controlled deployment. “Agents will operate on the territory as they’re given access. I can set some guardrails… literally my preference for what tolerance risk I have.”
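The sketch below shows one way such user-defined risk tolerances could be expressed as a guardrail policy. It is an assumption-laden illustration, not Anthropic’s implementation or any shipping product: the policy fields, action names, and thresholds are all hypothetical.

```python
# Hypothetical guardrail policy gating how autonomously an agent may act.
# Fields, action names, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class GuardrailPolicy:
    max_autonomous_risk: int            # highest risk score the agent may act on alone
    require_approval_actions: set[str]  # actions that always need human sign-off

def gate_action(policy: GuardrailPolicy, action: str, risk_score: int) -> str:
    """Return 'auto' if the agent may proceed alone, else 'needs_approval'."""
    if action in policy.require_approval_actions:
        return "needs_approval"
    return "auto" if risk_score <= policy.max_autonomous_risk else "needs_approval"

policy = GuardrailPolicy(
    max_autonomous_risk=3,
    require_approval_actions={"delete_data", "disable_account"},
)

print(gate_action(policy, "quarantine_host", risk_score=2))  # auto
print(gate_action(policy, "quarantine_host", risk_score=5))  # needs_approval
print(gate_action(policy, "disable_account", risk_score=1))  # needs_approval
```

The point is not the specific thresholds but that the risk tolerance lives in configuration the customer controls, rather than inside the model.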

Thoughts & AI Opportunities

Right now, agentic AI is like Schrödinger’s cat: it is both a marketing buzzword and a technology with the potential to transform much of the way cybersecurity is done today.

Leaders like Microsoft, CrowdStrike, and SentinelOne are showcasing its potential capabilities, including automating tasks and enhancing response times, but today it’s still important to have guardrails and keep a human in the loop at all stages.

“We never advise a customer to just turn on the automation and good luck,” SentinelOne’s President of Product, Technology, and Operations Ric Smith told me. “It’s like jumping into a driverless car and ending up having to call support to get out. The reality of it is that you want to gain that trust. There’s always human in-loop in the initial steps. And that’s what we advise customers to do until they gain trust that the system’s actually doing what they believe it’s doing.” 
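At its simplest, the human-in-the-loop pattern Smith describes looks something like the sketch below: the agent can propose a response, but nothing executes until an analyst signs off. The class and method names (and the example host) are hypothetical and not drawn from SentinelOne’s platform.

```python
# Illustrative human-in-the-loop gate: agents propose, analysts approve.
# All names are hypothetical, not taken from any vendor's API.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list[ProposedAction] = field(default_factory=list)

    def propose(self, description: str) -> ProposedAction:
        """The agent queues a response instead of executing it directly."""
        action = ProposedAction(description)
        self.pending.append(action)
        return action

    def approve_and_execute(self, action: ProposedAction) -> None:
        """Called only after a human analyst has reviewed the proposal."""
        action.approved = True
        self.pending.remove(action)
        print(f"[EXECUTED] {action.description}")

queue = ReviewQueue()
proposal = queue.propose("Isolate host WS-0423 after suspected credential theft")
# ... an analyst reviews the proposal in a console or ticketing workflow ...
queue.approve_and_execute(proposal)
```

As trust in the system grows, low-risk action types could graduate from this queue to the kind of autonomous handling gated by the risk-tolerance policy discussed earlier.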

Data quality is also pivotal. Ultimately, businesses with robust datasets and budgets hold the advantage, as AI’s effectiveness hinges on the quality of the training data an organization can provide. There is a danger that this leaves small and mid-sized firms lagging behind, creating a growing industry divergence in which larger companies enjoy much stronger security controls.

And will we ever reach a fully autonomous future? Only time will tell, most experts say.  “We’re just seeing with AI in general, the amount of changes happening on a daily basis. The technology is constantly changing,” Cycode’s Amir Kazemi tells Expert Insights. “I think there could be a world where, yeah, this does happen in a more autonomous way. But I think for right now, the vision we have is more of a human and AI collaboration.”

Change is happening so quickly that predictions should generally be avoided. But the most likely future seems to be one where the SOC team still exists, aided by agentic AI tools. “You’ll still need people in your SOC, but you’ll have agent-type automations in place that can do things much faster and better,” Proofpoint’s Global Resident CISO Patrick Joyce tells Expert Insights.

That’s a future we hope can become a reality.


For more reports like this, subscribe to Expert Insights. Get our latest analysis directly in your inbox.


Written by Joel Witts, Content Director

Joel Witts is the Content Director at Expert Insights, meaning he oversees all articles published and topics covered. He is an experienced journalist and writer, specialising in identity and access management, Zero Trust, cloud business technologies, and cybersecurity. Joel is a co-host of the Expert Insights Podcast and conducts regular interviews with leading B2B tech industry experts, including directors at Microsoft and Google. Joel holds a First Class Honours degree in Journalism from Cardiff University.