Google Cloud’s Anton Chuvakin: AI Will Favor Defenders in the Long Run

Google Cloud's Dr. Anton Chuvakin discusses how shadow AI governance has evolved, the human-in-the-loop debate, and why AI will ultimately favor defenders in the long run

Last updated on Mar 26, 2026
Written by Joel Witts

For the past two years, the dominant AI conversation in cybersecurity has been about applying AI to security: AI-powered SOCs, AI-driven threat detection, AI copilots for analysts. But at RSAC 2026, the conversation has shifted. 

Securing AI, and specifically securing agentic AI, was the central theme of the conference this year. The arrival of AI agents that act at machine speed and scale has forced CISOs to confront a new category of risk that traditional security controls were never designed to handle.

Dr. Anton Chuvakin has had a front-row seat to this evolution. A former Gartner analyst credited with coining the term “EDR,” Chuvakin now works within the office of the CISO at Google Cloud, where his team helps enterprise customers navigate the security challenges of cloud and AI adoption. 

He has spent several years watching the AI governance conversation evolve from basic questions about consumer-grade versus enterprise-grade tools to the far more complex challenge of governing autonomous agents with access to enterprise systems.

Expert Insights spoke to Dr. Chuvakin at RSAC 2026 to discuss how he’s advising CISOs on securing agentic AI, and why he believes AI will ultimately favor defenders in the long run.

Q. Can you give us an introduction to yourself and your role at Google Cloud?

I work for the Office of the CISO, which is an org that ultimately reports to the CISO of Google Cloud. The organization was built by Phil Venables, who was the CISO. The purpose is mostly helping customers have a secure experience in the cloud, with AI, and with Google technologies. We’re not part of any kind of sales channel. We mostly help customers, and in some cases prospects as well.

And we also do a fair bit of internal advisory, evangelism, and a little bit of research. Before that, I was a Gartner analyst. I spent eight years at Gartner, dealing mostly with things like security operations, detection and response, a little bit of threat intel, and quite a few other areas. Over eight years you tend to cover a fair bit of security.

Q. What are you seeing as the major themes at RSA this year?

I of course feel the AI in the air; that’s pretty obvious. But the impression I’ve started to develop is that securing AI has finally won over AI for security. Last year I felt like there were a lot of vendors who said, oh yeah, we’re going to apply AI to the SOC, we’re going to apply AI to do this, we’re going to use AI over here, and it may not work, we’ll see. But when I looked at my last year’s RSA recap, I said, I don’t see enough securing AI. It was really interesting that AI for security was everywhere, but securing AI was in a few corners.

I feel like this year, securing agents, agent identity, securing AI in general is there in force. And I feel like maybe the agents did it. They showed up in large numbers, and people said, yeah, well, chatbots we could have dealt with using traditional technologies, but with agents we can’t, and now we have to bring in identity, and then data security, and then governance. So my suspicion is that the big theme would be something to do with securing agents.

Q. You’ve written about securing agentic AI and agentic AI governance going back a couple of years now. How has that conversation evolved, and how are CISOs thinking about it today?

Many of our interactions were connected to the topic of shadow AI, which is a play on shadow IT, the ungoverned IT. We’ve done some studies comparing the adoption of cloud 10 years ago with the adoption of AI, and there are some parallels. But we also noticed that the speed of adoption is much higher. While an organization could essentially shy away from cloud, or pursue cloud only in a governed way, with an occasional software-as-a-service tool here and there, with AI, employees are there first.

I’ve spoken to clients from regions that are maybe not as technologically fast as the US, and they say, oh yeah, we are not planning to adopt anytime soon. And I say, but what about your employees? They probably all use consumer-grade AI tools. And they say, oh really, you think so? And I’m like, yeah, I’m pretty sure, because most studies say that even at companies that don’t use AI, the percentage of employees using AI can be in the 70s, which of course is quite shocking. But with cloud, it wasn’t like that.

The initial topic for us was always this: with consumer-grade AI, the vendor, the AI provider, may learn from your prompts; AI gets better; everybody benefits from that. But with enterprise-grade, you can, or by default you will, keep the vendor from learning from your data. If you use confidential data, it doesn’t sneak into the model. So consumer-grade versus enterprise-grade was a big topic for 2023. And really interestingly, there are still people confused about consumer-grade versus enterprise-grade today. So even that 2023 topic is still hot in some circles.

And then people started to say, we’re just going to block it. Remember there was this news a couple of years ago when some country said, we’re just going to block GPT at the country firewall. It was kind of hilarious, but of course some companies do that today. And of course every mobile device now has AI. An employee can whip out his device, aim the camera at the screen, and say, hey Gemini, help me improve this presentation. I’ve actually demoed that during one of my presentations about AI governance. So it started with telling people not to ban it. Because if they ban it, they push it deeper underground and have less control.

And then the discussion shifted to the path. How do you do this guided evolution? Hey, you want to use AI for work? Okay, what’s the use case? The use case is still at the center. Somebody says, I have this public data and I want to write some marketing copy with it. Honestly, if you use consumer-grade AI for that, it’s not good, but nobody’s going to die. Now, say I’m writing a driver for my company’s hardware.

Can I please upload this code to consumer-grade AI? That’s a no. This is probably a corporate secret. You don’t want it to become part of a knowledge base. So that’s enterprise-grade only. It became more about the use case. This use case, low risk, go over here. This one is medium risk, probably still okay with enterprise-grade. But this one is super high risk; in some cases maybe you don’t use AI, or you run something on-prem, or an open-source model in your data center, for very sensitive use cases. It increasingly became about risk-ranking these use cases.
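To make the risk-ranking idea concrete, here is a minimal, hypothetical sketch of how such a policy could be expressed in code. The tiers, sensitivity labels, and example use cases are illustrative assumptions drawn from the examples above, not an actual Google Cloud policy.

```python
# Hypothetical sketch: route each AI use case to the least-restrictive
# deployment tier its data sensitivity allows. All names are illustrative.
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1        # e.g. marketing copy built from public data
    INTERNAL = 2      # routine internal business content
    CONFIDENTIAL = 3  # e.g. driver source code for company hardware


class Tier(Enum):
    CONSUMER_OK = "consumer-grade acceptable (low risk)"
    ENTERPRISE_ONLY = "enterprise-grade only (vendor must not train on prompts)"
    ON_PREM_ONLY = "on-prem or self-hosted open-source model only"


@dataclass
class UseCase:
    description: str
    sensitivity: Sensitivity


def risk_rank(use_case: UseCase) -> Tier:
    """Map a use case to an allowed tier based on data sensitivity."""
    if use_case.sensitivity is Sensitivity.PUBLIC:
        return Tier.CONSUMER_OK
    if use_case.sensitivity is Sensitivity.INTERNAL:
        return Tier.ENTERPRISE_ONLY
    return Tier.ON_PREM_ONLY


if __name__ == "__main__":
    cases = [
        UseCase("Write marketing copy from public data", Sensitivity.PUBLIC),
        UseCase("Summarize internal meeting notes", Sensitivity.INTERNAL),
        UseCase("Review a hardware driver", Sensitivity.CONFIDENTIAL),
    ]
    for case in cases:
        print(f"{case.description} -> {risk_rank(case).value}")
```

In practice the mapping would be richer, factoring in regulation, data residency, and who the user is, but the shape stays the same: classify the use case first, then pick the guardrails.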

I’ve met one company where the CEO was a big fan of AI adoption and the CIO was afraid of AI. And the CISO reported to the CIO. So his boss said, don’t use AI, and his boss’s boss said, no, you must use AI. I have no idea how the guy kept his sanity. But it’s a quintessential example of being torn. And governance becomes the centerpiece, without forcing anybody’s hand. Because if you force the hand, you push it underground, and you never know what they do.

So where we are today, people have agents. And that means data leakage, data theft, and unauthorized data distribution become less about movement and more about action. One of the startups at yesterday’s Innovation Sandbox basically said, chatbots talk, but agents do.

And that’s almost exactly the quote from my presentation at RSAC, where I say that agents act, and that’s why you need to think about how to secure those actions, not just secure the movement of data. There will be a lot of really interesting governance discussions as a result: you govern outputs, you govern what tools the machine can use, you govern the circumstances. Google has a couple of pretty deep technical papers about how we approach it with the agents we build. But it’s constant education and constant discussion with customers.
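As a rough illustration of what governing tools and outputs might look like in practice, consider the sketch below. The agent IDs, tool names, and the output check are all invented for the example; Google’s published agent security papers describe the company’s actual approach.

```python
# Illustrative sketch: an agent may only invoke allowlisted tools, and
# tool output is screened before it is released. Names are hypothetical.
from typing import Callable

TOOL_ALLOWLIST: dict[str, set[str]] = {
    "support-triage-agent": {"search_kb", "draft_reply"},
    "marketing-agent": {"search_web", "draft_copy"},
}


def guard_invocation(
    agent_id: str,
    tool: str,
    run_tool: Callable[[], str],
    output_check: Callable[[str], bool],
) -> str:
    """Govern the tool (allowlist) and the output (screening) of one call."""
    if tool not in TOOL_ALLOWLIST.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not invoke {tool}")
    result = run_tool()
    if not output_check(result):  # e.g. a DLP scan before output leaves
        raise ValueError("tool output failed the output-governance check")
    return result


if __name__ == "__main__":
    reply = guard_invocation(
        agent_id="support-triage-agent",
        tool="draft_reply",
        run_tool=lambda: "Thanks for reaching out, here is a draft reply.",
        output_check=lambda text: "CONFIDENTIAL" not in text,  # stand-in DLP
    )
    print(reply)
```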

Q. Do you think CISOs are generally on board with AI, or is there resistance and fear?

This is actually a really fun question. I’ve seen surveys, and I remember one that says, let’s say, 75% of CISOs are afraid of AI and are constraining AI progress. And then another survey says 70% of CISOs adopt AI-powered security tools. At some point I shared them on LinkedIn and asked, how can both be true? Can you be a resister of AI progress and also an early adopter?

But I think both actually are true, because every traditional security vendor, think firewalls, antivirus, probably uses AI somewhere. So in essence, if you as a CISO use that tool, you use AI. It may be embedded, it may be auxiliary to the main mission, but you would have AI. And at the same time, if the business comes and says, hey, we want to adopt AI for this thing, it’s the CISO’s responsibility to secure that AI. So it’s possible that both are true: they are resistant to some uses, and they’re also supporting, at least gradually, maybe willingly, the fact that every security tool now has AI.

Q. Has AI added more responsibilities to the CISO role, or is it also taking things off their plate?

I think the answer is both. It has added responsibilities, because people bring their own agents to work and try to get them to do their work, which adds to the CISO’s stack. But the fact that certain tasks are now automated by machines takes things off. So thinking about the CISO’s inbox, some stuff gets piled in, but some stuff gets removed. Deep in my heart, I feel like more stuff gets added. We are helping the CISO, but we also pile some stuff into his inbox.

One thing that I feel has gotten better: most of my 2023 and 2024 presentations started with a slide that had security, privacy, compliance, ethical use, intellectual property, and a bunch of other things that normally have nothing to do with the CISO, but they all showed up in the CISO’s inbox. And I felt like this was just unfair to CISOs, because a CISO should not be an AI ethicist. It’s just not his job, not his skill set. But because nobody else wanted it, it was shoved into the CISO’s inbox. I feel like we are better here now. That stuff is mostly going to the right places: if it’s ethical use, it goes to legal, and the CISO deals more with proper cyber, compliance, and maybe privacy. That’s settled a little bit.

And I feel like the AI-powered tools do take some tasks off. We are doing a project on our team to catalog use cases for the CISO personally, like, how does the CISO specifically benefit? That’s going pretty well. And I feel like there are examples where their whole life becomes easier because of AI.

Q. There’s a big question about human in the loop and whether it’s scalable or feasible when you’ve got AI-powered attacks happening in seconds and agents doing hundreds of thousands of tasks a day. What’s your view?

I have a really funny story about that. I was always repeating the message from papers that said, well, if it’s a critical task, you should stop and give it to a human, have a human validate. And I would say, yeah, absolutely. It became a mantra. And then I had an experience where I coded an app connected to my podcast. The app does a certain task and presents a result. The result is correct in 85% of cases and very incorrect in the other 15%. And I, the user, had to say, okay, let me validate it. I got tired very quickly. I think within a day I was like, human in the loop? No.

I can see why people bring up self-driving cars when they say, oh yeah, we’ll hand it to a human. What do you mean, on a highway at 80 miles an hour? No. So I became a lot more measured about promoting human in the loop. Let’s think about it: can you even get a human? If it’s a rare, critical task where there is a human expert on tap, and you absolutely have to do it because if you don’t, something will blow up, then human in the loop. But it’s not a panacea for many problems.

We had a presentation about how Google uses agents for security at Google, not for customers, but for our own security. And the teams that built it basically said, yeah, we faced the same thing. We wanted to hand things to a human, and then we cataloged all the things we’d hand to a human, structured them, and handed them to a machine. Because the result was: if we know exactly what you’d hand to a human and how the human would make the decision, you probably don’t need to hand it to a human.
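The pattern the team describes, cataloging exactly what would have been handed to a human and then encoding it, might look something like this sketch. The alert fields, sources, and thresholds are invented for illustration; anything not covered by an explicit rule still escalates to a person.

```python
# Illustrative sketch: once the "what would the human decide?" rules are
# explicit, the machine can apply them; only the ambiguous residue escalates.
KNOWN_BENIGN_SOURCES = {"backup-scanner", "patch-orchestrator"}  # hypothetical


def triage_alert(alert: dict) -> str:
    # Rule 1: known-benign source and low severity -> the human always closed it.
    if alert["source"] in KNOWN_BENIGN_SOURCES and alert["severity"] <= 2:
        return "close"
    # Rule 2: confirmed-malicious indicator -> the human always contained it.
    if alert["ioc_verdict"] == "malicious":
        return "contain"
    # Not covered by a cataloged rule: this is where human-in-the-loop remains.
    return "escalate_to_human"


if __name__ == "__main__":
    print(triage_alert({"source": "backup-scanner", "severity": 1,
                        "ioc_verdict": "benign"}))    # -> close
    print(triage_alert({"source": "vpn-gateway", "severity": 4,
                        "ioc_verdict": "unknown"}))   # -> escalate_to_human
```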

There’s also a question of cognitive load and stress. My colleague, who was my co-presenter, basically said that for some tasks it seems easier to review if you’re an outsider. But if you’re the one inside, actually completing the task takes you X mental energy, and reviewing takes X minus one, a little bit less, but you have to quickly get into it and be a little bit stressed about it. And then you have a high-stakes decision. So I probably would just rather do the task. That also confirmed my extra care when I say, yeah, just hand it to a human.

Q. How can organizations start to build governance frameworks for agentic AI? How do you actually track what agents are doing?

I feel like the first step is still largely about the use cases. What do you use it for? If it’s something low stakes, low risk, there’s a lot more opportunity for experimentation, a lot looser guardrails, a larger sandbox, without really caring much about risk within that box. But as long as you can carefully risk-rank the planned use for AI, it starts to go from, in this box you can do whatever you want, to, okay, with this data and this enterprise system, here are the controls. Here’s where you cannot plug this agent into a specific system, because the data that leaves may never come back.

I still feel like a lot of this ends up being: first rank the use case, then decide where IAM, identity and access management, comes into play, and where you just build a classic network boundary. And of course there’s the question of what security is in the model. One of the tenets in the Google AI agent security paper is that we do use traditional controls, but we also train the model not to do certain things, or train the model to be resilient against certain attacks.

The new defense in depth here is that there are traditional controls, blocking, preventing, hygiene security controls, and then the model stuff. There is no way to say that one is better than the other; both are a must. You can’t stop models being tricked without the model being more resilient, but it’s a lot easier to stop network access by, well, stopping network access than by teaching the model not to respond to something. So it’s defense in depth: traditional security, which does not go away, plus all the model stuff that is done by data scientists.
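A schematic way to picture that layering, with a deterministic traditional control in front of a model-level check, is sketched below. The host allowlist and the string check are stand-ins; in practice the second layer is the model itself being trained to resist attacks, not a simple function.

```python
# Schematic defense-in-depth sketch: both layers must pass; neither alone
# is sufficient. All names and checks are illustrative stand-ins.
ALLOWED_HOSTS = {"internal-api.example.com"}  # classic network boundary


def traditional_control(target_host: str) -> bool:
    """Layer 1: plain allowlisting. Cheap to enforce, easy to audit."""
    return target_host in ALLOWED_HOSTS


def model_level_check(prompt: str) -> bool:
    """Layer 2 placeholder: really the model's trained resilience, not code."""
    return "ignore previous instructions" not in prompt.lower()


def agent_may_proceed(target_host: str, prompt: str) -> bool:
    # Neither layer replaces the other; the agent acts only if both agree.
    return traditional_control(target_host) and model_level_check(prompt)
```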

There was a consumer-grade agent that some people connected to enterprise systems, and it was built to have only read access. I felt like that was an interesting safety measure. It isn’t a fix-all, but it’s an interesting point: this agent can never write to the CRM; it can only read from it. You can still do a lot of damage by copying stuff from the CRM and pasting it on a website. The agent may not do this, but at least you have some of the guardrails built in. And of course there will be people who say this is too limited. But ultimately, it should be done in an enterprise context, not by bringing in tools that you bought for 20 bucks online.
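The read-only guardrail he describes can be pictured as a facade that simply never exposes write operations to the agent. A minimal sketch, with a hypothetical CRM client standing in for a real SDK:

```python
# Minimal sketch of a read-only guardrail: the agent is bound to a facade
# that exposes reads only. CrmClient and its methods are hypothetical.
class CrmClient:
    """Stand-in for a full-access CRM SDK client."""

    def get_contact(self, contact_id: str) -> dict:
        return {"id": contact_id, "name": "Example Contact"}

    def update_contact(self, contact_id: str, fields: dict) -> None:
        raise NotImplementedError("mutating call the agent must never reach")


class ReadOnlyCrm:
    """Agent-facing facade: reads pass through, writes simply do not exist."""

    def __init__(self, client: CrmClient) -> None:
        self._client = client

    def get_contact(self, contact_id: str) -> dict:
        return self._client.get_contact(contact_id)


if __name__ == "__main__":
    crm = ReadOnlyCrm(CrmClient())
    print(crm.get_contact("42"))   # allowed
    # crm.update_contact(...)      # AttributeError: no write surface exists
```

As he notes, this doesn’t stop exfiltration by copying data out, so it complements rather than replaces the other controls.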

Q. Are you optimistic about how AI can benefit security teams, or are you nervous about the future?

Obviously I’m optimistic. Phil Venables had this line about how AI benefits defenders in the long term. I was initially a bit of a skeptic of that. But ultimately, I’m convinced that defenders have more data, and that in the long term AI would benefit defenders more than attackers.

It’s a somewhat unpopular message today, because people say, yeah, but I can have an LLM hack. Yes, you can. But I feel like in the long term it would favor defenders. So I’m definitely optimistic.

Written by Joel Witts, Content Director

Joel is the Director of Content and a co-founder at Expert Insights, a rapidly growing media company focused on covering cybersecurity solutions.

He’s an experienced journalist and editor with 8 years’ experience covering the cybersecurity space. He’s reviewed hundreds of cybersecurity solutions, interviewed hundreds of industry experts, and produced dozens of industry reports read by thousands of CISOs and security professionals on topics like IAM, MFA, zero trust, email security, DevSecOps, and more.

He also hosts the Expert Insights Podcast and co-writes the weekly newsletter, Decrypted. Joel is driven to share his team’s expertise with cybersecurity leaders to help them create more secure business foundations.