Red Canary: AI Is Helping Adversaries Exploit Blind Spots Before We Can Fix Them

Red Canary VP of Product Marketing Aaron Landgraf discusses the findings of the 2026 Threat Detection Report

Last updated on Mar 25, 2026 · 7 Minutes To Read
Written by Joel Witts

The cybersecurity hype cycle makes it feel like everything is changing. The reality is more nuanced. According to Red Canary’s 2026 Threat Detection Report, adversaries are not reinventing the wheel. They are using AI to do the same things faster: compromising identities, exploiting blind spots, and social engineering their way into organizations at scale.

At RSAC 2026, Expert Insights sat down with Aaron Landgraf, VP of Product Marketing at Red Canary, which was acquired by Zscaler in 2025. In this Q&A, Landgraf discusses the key findings of the 2026 Threat Detection Report, why identity-based attacks have surged to the top, how AI is changing the attacker-defender dynamic, and why the fundamentals of good security hygiene still matter more than ever.

Q. What are the key themes for Red Canary here at RSA this year?

I think for us, the world may feel different, but when you look at the data that shows up in the Threat Detection Report, it’s not all that different. AI is being used as a tool by adversaries to accelerate and automate what they’ve been doing historically.

We have capabilities that allow customers to detect some of those behaviors that have been consistent across 11 years of our research. It may feel like the world is undergoing massive change, but the reality is we actually have the capabilities to solve some of this already in market.

Q. The 2026 Threat Detection Report came out last week. What were the main takeaways?

The big one for us is the surge in identity-based attacks. Cloud account compromise saw a massive surge this year in terms of what we detected. As a baseline, we often sit at the very early stages of an attack lifecycle. So, we are seeing things before they may show up as an alert in one of your endpoint systems.

What’s so important about cloud account compromise is that it’s nuanced. It’s not always a bad thing when you see ‘Bob’ from your team logging in from the Philippines with a new VPN. It actually requires a bit of business context to understand if that is something the SOC needs to take action on. 

Even two or three years ago, the amount of swivel-chairing that was required to go from “there’s this human doing this thing on this device in a funky way” to “how do we check that it’s actually ‘Bob’ in the Philippines” used to take forever. AI is helping us go from insight to action at incredible speeds now.

Q. Is identity now the main battlefield? Is this a major shift from where things were?

We grew up on the endpoint. We have seen historically attackers trying to take advantage of vulnerabilities on the endpoint. What they are discovering, and it shows up in our data, is that it’s easier to log in to the network with valid credentials. If you apply the tools of social engineering, which AI helps adversaries do at scale, it’s just a lower-friction way to get access to a company’s crown jewels. That’s why it shows up so prominently in our data.

Q. Can you give us a bit of background on Red Canary and how you fit into the broader security ecosystem, particularly now as part of Zscaler?

Red Canary got its start by helping companies who didn’t have access to Fortune 100-level security operations take advantage of that capability. MDR wasn’t even really a category when the company was formed. We came in when companies had controls like an EDR, like Carbon Black, and they were telling us “We have all this data, all these alerts, all this noise coming out of our EDR systems. We just don’t know how to operationalize it to better protect our business.” That’s the problem Red Canary solved originally.

It continues to be the primary type of problem we solve: customers deploy controls in their environment, those controls create a lot of signal, and customers have a tough time figuring out what signal to focus on and what action to take. That mission now extends within Zscaler as well, looking across endpoint data, identity data, cloud data, and now zero trust data too.

Q. The report highlights threats to AI infrastructure as a major trend. How real are these threats today?

We’re seeing creativity from adversaries trying to figure out how to use agent capabilities that are exposed by the customer to get access to the crown jewels. I don’t think they are being deployed massively yet. But knowing that adversaries tend to travel the path of least resistance and also knowing that most organizations today are still in very early stages of getting visibility into how AI is deployed across their organization, I think it’s reasonable to expect that this emerging trend will continue.

It’s about adversaries probing where companies don’t quite have visibility yet over their AI deployments and finding the gaps. Ten years ago, the conversation was around shadow IT. I think it’s reasonable to assume there’s a ton of shadow AI happening within organizations. If you don’t know how these tools are being deployed, if you don’t know what data AI agents have access to, how can you possibly protect it?

Q. For clients coming to Red Canary and Zscaler with these problems, how are you helping to address them?

In two ways. Zscaler has a number of AI security products that we have introduced to establish visibility and better manage risk related to how organizations are using AI. Where Red Canary plugs in is helping organizations match the speed of the adversary with speed for the defenders.

This isn’t a new pattern. Five or ten years ago, adversaries were starting to automate with code, so we stood up automated capabilities to detect and respond with code. Those were the SOAR platforms. The next iteration is: how do we take advantage of AI agents and develop agentic workflows that can detect and respond at machine speed? You can’t have humans trying to fight AI threats at the scale and speed that’s required. It’s too fast.

Even for Red Canary (and you can think of us as having one of the most sophisticated security operations teams in the world, because we protect over a thousand different organizations, with all sorts of different controls, across all sorts of different sizes and industries), we have already identified ways to improve the efficiency and capability of our experts through AI.

It’s because of AI agents’ ability to take multiple disparate data sources and make sense of them at machine speed. Even a couple of years ago, it either required you to write so much code and process so much data that it was too expensive, or by the time the insight showed up in front of the analyst, the adversary had already moved on. Matching speed with speed is starting to feel like a must-have.

Q. How have you taken the insights from the Threat Detection Report and used them to help your customers and the broader community?

We write this report not just for our customers but for the broader security operations community. It is full of recommendations and best practices for how to protect yourself, not only against AI-powered attacks but also against the things we see over and over again. Adversaries are using the tools that are commonly deployed across every enterprise to deploy the same techniques they’ve been using for the last five to seven years. 

They’re just doing them faster with AI. That continues to be our drumbeat: the world may feel like it’s changing, but the nuance is that it’s just moving faster, and we have the ability to protect ourselves, often by just making sure you have your systems patched or that you are taking advantage of emerging capabilities to identify and manage exposures.

We even go so far as to give customers and the community the detection analytics that we’ve written. If you have the wherewithal to take our detection logic and plug it into your engine, then great. We don’t want to keep that to ourselves.

Q. Is there anything in the report that people might have missed, or a trend below the headlines that deserves more attention?

When we talked about identity compromise and AI-powered adversaries, I think there’s an interesting trend around “paste and run,” which showed up in the report. It’s an advanced form of social engineering where users are coerced into copying a command and pasting it into their command line, which effectively hands over control of their machine. For that one in particular, the only way out feels like enablement and awareness training, for employees, for your parents, for whoever might be a potential victim.
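To make the “paste and run” pattern concrete: a victim typically pastes an attacker-supplied one-liner into the Run dialog or a shell, so the telltale telemetry is something like explorer.exe spawning an obfuscated PowerShell. Here is a minimal, illustrative detection heuristic; the parent/child pairs and command-line tokens are assumptions for the sketch, not Red Canary’s published detection logic.

```python
# Illustrative "paste and run" detection heuristic.
# Assumption: process-creation telemetry (e.g., from an EDR) gives us the
# parent image, child image, and full command line for each event.

SUSPICIOUS_PARENTS = {"explorer.exe"}                      # where pasted commands launch from
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe", "mshta.exe"}
SUSPICIOUS_TOKENS = ("-encodedcommand", "iex", "downloadstring")

def looks_like_paste_and_run(parent: str, child: str, cmdline: str) -> bool:
    """Flag a process event matching the paste-and-run pattern:
    an interactive shell spawned by explorer with obfuscation markers."""
    cmdline = cmdline.lower()
    return (
        parent.lower() in SUSPICIOUS_PARENTS
        and child.lower() in SUSPICIOUS_CHILDREN
        and any(token in cmdline for token in SUSPICIOUS_TOKENS)
    )

# A pasted encoded PowerShell one-liner trips the rule; a shell launched
# by a developer tool with a clean command line does not.
print(looks_like_paste_and_run(
    "explorer.exe", "powershell.exe",
    "powershell.exe -NoProfile -EncodedCommand aQBlAHgA"))   # True
print(looks_like_paste_and_run(
    "code.exe", "powershell.exe", "powershell.exe -NoProfile"))  # False
```

A real analytic would of course be tuned against the environment’s baseline; the point of the sketch is that the behavior leaves a consistent process-lineage fingerprint that awareness training alone doesn’t rely on.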

The good news about AI for our team in particular is that because you can now leverage agentic workflows to do some of the more repetitive work of the SOC, we actually have our team starting to step out and take on some of these more strategic initiatives. I expect the same will be true of the security operations teams that we service.

Q. As a final question, what is the one threat you think will define the next 12 months?

I think we’re going to be talking a lot about AI, but social engineering is going to become more and more sophisticated. Our ability to match that sophistication is going to be paramount.

Written By
Joel Witts, Content Director

Joel is the Director of Content and a co-founder at Expert Insights, a rapidly growing media company focused on covering cybersecurity solutions.

He’s an experienced journalist and editor with 8 years’ experience covering the cybersecurity space. He’s reviewed hundreds of cybersecurity solutions, interviewed hundreds of industry experts, and produced dozens of industry reports read by thousands of CISOs and security professionals, on topics like IAM, MFA, zero trust, email security, DevSecOps, and more.

He also hosts the Expert Insights Podcast and co-writes the weekly newsletter, Decrypted. Joel is driven to share his team’s expertise with cybersecurity leaders to help them create more secure business foundations.