Humans Can’t Keep Up With AI Agents And Shouldn’t Try, Security Leaders Say

Humans risk becoming a bottleneck if they try to micro-manage AI agents, say experts

Published on Mar 24, 2026
Written by Joel Witts

RSAC 2026 – Cybersecurity leaders at this year’s RSAC Conference are pushing back on the idea that humans should supervise every AI agent action, warning that human-in-the-loop oversight is too slow to defend against AI-powered attacks.

“The idea of having a human in the loop in a lot of defense processes is just too slow when you have an agentic attack,” Francis deSousa, COO of Google Cloud, told an opening panel on Monday.

With AI agents, cybercriminals are constantly scanning your network perimeter, and a breach can now take as little as thirty seconds, he warned. “There’s no scalable way for humans to defend against an AI attack… The response has to be AI fighting AI. Not humans fighting AI, otherwise we won’t win.”

DeSousa pointed to breakout times, the window from initial compromise to lateral movement, which have collapsed from more than 10 minutes to under 30 seconds. Shaun Khalfan, CISO at PayPal, echoed this point: “The pace at which the attacks are happening, it’s just so much faster than we ever used to see.”

For PayPal, in cases where there is 100% certainty a threat is real, humans can be cut out altogether. “We have in some high-confidence use cases where we know it’s bad — let’s not have the human in the loop, because we have 100% certainty that this threat pattern is going to cause harm. So, let’s respond to it at AI speed and not human speed,” Khalfan explained.

“Human in the loop is not the solution for the long term,” agreed Emma Smith, CISO at Vodafone. “If we think about our traditional security controls, the ones that rely on a human or human behaviors are the ones that we don’t rely on the most, let’s face it. We rely on the ones that are technical and that are automated.”

Moscone Center, San Francisco

Leaving aside the feasibility of having a team of people review potentially hundreds of AI actions, there is also the risk that humans end up stuck with the ‘boring’ task of checking them off, she added.

Code review is another area where humans cannot feasibly be expected to inspect all agentic AI output. Dave Aitel, Member of Technical Staff at OpenAI, argued that in the very near future human developers will not be doing PR-level code reviews at all.

“A year from now, I don’t think you’re going to have humans doing PR-level reviews, because it’s a waste of their time, and it’s a huge bottleneck,” he told a panel on managing AI code in the wild. “I don’t look at my calculator’s internal operations to check that it’s working properly.”

Humans Overseeing The Loop

So, what does the pivot look like? Rather than humans stuck in the loop, we should instead be thinking about how humans oversee the loop, Nick Godfrey, Senior Director at Google’s Office of the CISO and former CISO at Goldman Sachs, told Expert Insights.

Smith made a similar point, arguing the focus should shift to getting humans ‘on the loop,’ rather than in it.

“We’ve got to really think about how…the humans get insights from AI, rather than trying to be the controller or the reviewer of everything, because it’s just not going to scale,” she said.

Giving an example of this principle in practice, Khalfan explained how the PayPal team has built ‘dumb agents’ that are each given a specific offensive task and report back to a central orchestration agent, while a human oversees the whole process from the console without being actively involved in each step.
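The pattern Khalfan describes can be sketched in a few lines: narrow single-task agents report to a central orchestrator, and the human sees a console summary rather than approving each step. This is purely illustrative; every class and name below is hypothetical and not a description of PayPal’s actual system.

```python
# Hypothetical sketch of a "dumb agents + central orchestrator" pattern.
# Each TaskAgent handles exactly one task; the Orchestrator collects
# their reports and surfaces a summary for human oversight (not approval).
from dataclasses import dataclass


@dataclass
class Finding:
    agent: str
    task: str
    detail: str


class TaskAgent:
    """A narrow agent responsible for exactly one task."""

    def __init__(self, name: str, task: str):
        self.name = name
        self.task = task

    def run(self) -> Finding:
        # A real agent would actually perform its task (e.g. a port scan
        # in an authorized test); this stub just reports completion.
        return Finding(self.name, self.task, "completed (stub)")


class Orchestrator:
    """Central agent: dispatches tasks, collects reports, and produces
    a console summary for the human overseeing the loop."""

    def __init__(self, agents: list[TaskAgent]):
        self.agents = agents
        self.findings: list[Finding] = []

    def run_all(self) -> list[Finding]:
        for agent in self.agents:
            self.findings.append(agent.run())
        return self.findings

    def console_summary(self) -> list[str]:
        return [f"{f.agent}: {f.task} -> {f.detail}" for f in self.findings]


if __name__ == "__main__":
    orch = Orchestrator([
        TaskAgent("recon-1", "perimeter-scan"),
        TaskAgent("creds-1", "credential-exposure-check"),
    ])
    orch.run_all()
    for line in orch.console_summary():
        print(line)
```

The design point is the reporting direction: agents never act on each other directly, they only report upward, which is what keeps a single human console view feasible as the number of agents grows.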

Ultimately, companies want to adopt AI because it can bring automation and speed. As organizations deploy autonomous AI agents at scale, humans cannot be the ones making sure each action is secure. Otherwise, what’s the point of AI adoption?

“To the extent that something is autonomous, it’s able to make decisions, it’s able to remove toil from things that we’ve done before. That’s an opportunity for economic value to be created,” Jason Clinton, Deputy CISO at Anthropic, told a panel on Monday. “But autonomy is also risk,” he added. “Every organization has to decide where on that scale they are going to accept how much autonomy they want to be enabled.”

Taking this a step further, some leaders are questioning how the role of security leaders themselves will be shaped by autonomous AI agents.

“In 2025, a lot of people said: you will not be replaced by AI — you’ll be replaced by a person using AI. Honestly, I’m not sure I believe that anymore,” said Gadi Evron, CEO at Knostic. “I think I will be replaced by AI, and I’m a CEO. So, my focus really — the only answer I have — is: how can I remain relevant?”

This was echoed by Akshay Joshi, Head of the Centre for Cybersecurity at the World Economic Forum, who asked the audience: “What would leadership skills look like if we were working with a mix of humans and numerous agents? I would argue that a lot of the skills that have made many of us successful may not be quite as relevant.”