AI was supposed to make security teams more efficient, more productive, and ultimately, more secure.
But today, AI is introducing new risks that CISOs are left to solve. It’s not making their lives easier; it’s making them harder.
AI governance has been an issue since the first consumer LLMs launched in late 2022. But while that first wave posed security risks, such as users uploading sensitive files, agentic models are a whole new ball game.
AI agents have access to system files, can browse the web, can delete files, and can install skills and MCP servers. We are already starting to see AI supply chain attacks targeting open-source repos with the specific goal of compromising AI systems via techniques like prompt injection.
Many CISOs are in the dark about which models and agents their employees are using, and which files and services those agents can access. Unlike human users, an AI agent can enumerate every file it can reach in seconds, which makes over-privilege a real risk.
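To make the over-privilege point concrete, here is a minimal sketch, in Python, of the kind of guardrail that addresses it: gating an agent’s file-access tool behind an explicit allowlist. The function names and workspace path are hypothetical and not drawn from any particular agent framework.

    # Minimal sketch of a least-privilege guard for an AI agent's file-access tool.
    # Names and paths are illustrative only; real agent frameworks expose different hooks.
    from pathlib import Path

    ALLOWED_ROOTS = [Path("/srv/agent-workspace").resolve()]  # directories the agent may read

    def is_path_allowed(requested: str) -> bool:
        """True only if the requested path resolves inside an allowed root."""
        target = Path(requested).resolve()
        return any(target == root or root in target.parents for root in ALLOWED_ROOTS)

    def read_file_tool(path: str) -> str:
        """Tool handler the agent calls; denies anything outside the allowlist."""
        if not is_path_allowed(path):
            raise PermissionError(f"Agent denied access to {path}")
        return Path(path).read_text()

The specifics matter less than the principle: without an explicit boundary like this, an agent granted broad filesystem access can enumerate and exfiltrate far more, far faster, than any human user could.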
Who Governs AI Deployments?
Khush Kashyap, Senior Director of GRC at Vanta, told Expert Insights that the problem of shadow AI is “exponentially bigger than shadow IT ever was.”
Expert Insights’ own research backs this up: 64% of organizations acknowledge they lack effective governance or technical controls for generative AI, rising to 76% among those that have been breached. Meanwhile, 96% of cybersecurity leaders are concerned about AI-related threats in 2026.
Molly McLain Sterling, who led security awareness programs before moving to Proofpoint’s cybersecurity strategist team, identified three insider risk personas that AI has created.
The first is the employee who treats ChatGPT like a trusted colleague and feeds it company secrets. The second is the YOLO developer: the engineer giving “dangerous levels of permissions” to AI agents without thinking about the security implications.
The third is the overwhelmed human-in-the-loop: the person whose team got halved because “AI was going to save the day,” and who is now drowning in work with fewer resources.
All of this is happening while boards and CEOs are pushing CISOs to adopt AI faster. The mandate is coming from the top: deploy AI, find efficiencies, stay competitive. But the budgets and tooling to do that securely are lagging behind.
Of organizations that reported a significant breach in the past 12 months, 72% directly attributed an incident to the misuse of, or a vulnerability in, generative AI tools, according to Expert Insights’ research.
Where Do CISO Responsibilities End?
With this in mind, it’s no surprise that 67% of CISOs reported increased workloads compared to 12 months ago, and 60% have experienced professional burnout in the past year.
The scope of the CISO role has also expanded in ways that would have been unthinkable five years ago. AI governance, data sovereignty, third-party AI risk, non-human identity management: these are all landing on the CISO’s desk, whether the organization is ready for them or not.
Dr. Anton Chuvakin, Security Advisor in the Office of the CISO at Google Cloud, told Expert Insights that with the move to AI, CISOs have been asked to fill roles that don’t belong to them. “A CISO should not be an AI ethicist. It’s just not their job, not their skill set,” he said.
This has improved over time, he said, but there is still an additional workload CISOs have to deal with. “It’s added to responsibilities because people bring their own agents to work and try to get them to do their work, which adds to the CISO’s stack,” he said.
Peterson Gutierrez, VP of Information Security at Barracuda, described how the role has shifted from technical gatekeeper to business partner. CISOs are now expected to answer questions “as businesspeople first,” he said.
Jon Ramsey, VP and GM of Google Cloud Security, put it another way: CISOs need to be “working on the business, rather than working in the business.” The tactical, firefighting model of security leadership is no longer sustainable.
But making that shift is easier said than done. Erich Kron, CISO Advisor at KnowBe4, described the practical dilemma: “We’re concerned about it, but in the same breath, we know that if we say no, people are going to do it anyway. So, they’re kind of running that razor’s edge.”
Slowing Down AI Is Not an Option, but AI Benefits Defenders More
Several security leaders at RSAC made the same point: CISOs who try to block AI adoption will not last. Deepen Desai, Chief Security Officer at Zscaler, was direct about it.
“CISOs are not in a place to stop AI. But they are in a place to become that business enabler. How do I help my organization securely adopt it?” Desai told Expert Insights.
McLain Sterling agreed. “You really cannot go to that old-school model of ‘the office of no.’ I think those CISOs are quickly going to be out the door,” she said.
Francis de Sousa, VP and GM of Security Operations at Google Cloud, argued that the shift is not optional. “There’s just no scalable way for humans to defend against an AI attack,” he said. “It has to be AI fighting AI, not humans fighting AI, otherwise we won’t win.”
But there are silver linings: when it comes to AI, defenders have advantages too.
“We know our infrastructure better than anyone. And because we know our infrastructure better than the adversaries, we can use AI to understand our infrastructure and proactively fix exposures,” Jon Ramsey said.
“We can use AI to fix things before the adversary can attack the infrastructure. Even when the adversary is attacking, they still have to learn what the infrastructure is. That gives us a head start over the adversaries.”
For now, AI is adding complexity, accelerating threats, and stretching CISOs thinner than ever. The leaders who will close the gap are the ones working with the business, setting guardrails, and using AI to defend against AI. The rest are waiting for a promise that won’t deliver itself.