The rise of AI agents is creating a governance challenge that dwarfs anything the industry faced with shadow IT. Employees are adopting AI tools faster than security teams can evaluate them, open-source libraries are being used to circumvent human-in-the-loop controls, and traditional security tools simply can’t keep up.
For CISOs, the result is a larger attack surface, more complex vendor relationships, and a workload that is growing faster than their ability to manage it.
At the same time, AI is possibly the best tool security teams have for scaling their own operations. The challenge is that the complexity AI introduces is currently outpacing the efficiency gains it delivers, creating what Vanta’s Khush Kashyap describes as a “CISO burnout paradox”: the technology that is supposed to make life easier for CISOs is, for now, making it harder.
We spoke to Kashyap, Sr. Director of GRC at Vanta, at RSAC 2026 to discuss how organizations should approach AI governance, why shadow AI agents are a hidden supply chain risk, and why building trust with employees is the most effective security strategy in the age of AI.
Q. What are the key themes for Vanta at RSA this year?
From a Vanta product perspective, one of the things we are really excited about addresses a problem that CISOs and security leaders face: their data is in disparate systems, their workflows are not interconnected, and the dependencies are not well known, so everything is quite scattered. There is a lot of benefit in driving clarity through that chaos and in understanding what that clarity leads to, whether that's automated workflows, better decisions, or better-managed risk. That's Vanta's goal and what we have been trying to solve for.
A bunch of new features have launched. The one I'm really excited about is customer commitments. Imagine a CISO who wakes up at midnight when an incident happens and has to figure out who needs to be notified within 12 hours, who within 24 hours, and who within 36. These are really difficult things to manage when they don't exist in your source of truth or your system of record, which is what Vanta gives you with customer commitments tracking. What is your SLA for patching? What about P0s, the vulnerabilities that are customer-facing? All of these things you can now track within Vanta.
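To make that concrete, here is a minimal sketch of the idea, not Vanta's actual data model; the customers, obligations, and notification windows are illustrative. Each commitment is recorded once in a system of record, and an incident's start time is turned into a sorted list of notification deadlines:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only, not Vanta's schema. Each commitment records what was
# promised to a customer and the window in which it must happen.
@dataclass
class Commitment:
    customer: str
    obligation: str          # e.g. "incident notification"
    window_hours: int        # e.g. 12, 24, or 36

COMMITMENTS = [
    Commitment("Acme Corp", "incident notification", 12),
    Commitment("Globex", "incident notification", 24),
    Commitment("Initech", "incident notification", 36),
]

def notification_deadlines(incident_start: datetime) -> list[tuple[str, datetime]]:
    """Given an incident start time, compute who must be notified and by when."""
    due = [(c.customer, incident_start + timedelta(hours=c.window_hours))
           for c in COMMITMENTS if c.obligation == "incident notification"]
    return sorted(due, key=lambda pair: pair[1])   # most urgent first

if __name__ == "__main__":
    for customer, deadline in notification_deadlines(datetime.now()):
        print(f"Notify {customer} by {deadline:%Y-%m-%d %H:%M}")
```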
There is also a lot Vanta is doing around third-party risk and internal risk management. Third-party risk management has, for a very long time, been a check-the-box exercise for audits: you do it once a year, you review your vendors. But we are moving toward a continuously monitored third-party risk management program, which means that if there is a breach, you will be informed through the Vanta platform. It's not a once-and-done exercise. It's continuously monitored, we get signals, and those signals inform the posture of the vendor we are working with.
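As an editorial illustration of that continuous-monitoring shift (the signal names and scoring below are hypothetical, not Vanta's), the change is from an annual review to folding external signals into a live vendor posture as they arrive:

```python
# Hypothetical sketch: external signals (breach disclosures, advisories)
# update a vendor's posture continuously instead of once a year.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VendorPosture:
    name: str
    score: int = 100                       # 100 = healthy; lower = riskier
    events: list = field(default_factory=list)

# Illustrative weights; a real program would tune these.
SIGNAL_WEIGHTS = {"breach_disclosed": 60, "critical_cve": 30, "cert_expired": 10}

def apply_signal(vendor: VendorPosture, signal: str) -> VendorPosture:
    """Fold an incoming risk signal into the vendor's live posture."""
    vendor.score = max(0, vendor.score - SIGNAL_WEIGHTS.get(signal, 5))
    vendor.events.append((datetime.now(timezone.utc), signal))
    if vendor.score < 50:
        print(f"ALERT: {vendor.name} posture degraded to {vendor.score}, review required")
    return vendor
```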
Q. What are you hearing on the floor at RSA?
Yesterday, there wasn’t one session which didn’t talk about AI, whether it’s securing your AI-based risk landscape or using AI to drive security. More the former than the latter, but that’s what we have been hearing. And I think it’s very timely because of the way the world has changed in terms of technology consumption. It’s like when cloud came about, or even bigger, when the internet came about.
And I think good conversations are being driven right now to ensure we do this thoughtfully, which is where governance and security come into the picture without being blockers. We have to enable our business teams, but how do we do it more thoughtfully and securely?
Q. From the CISOs and security teams you speak to, what’s the general attitude toward implementing AI? Is it enthusiasm or resistance?
Oh gosh, it's a range, and for good reasons. If you talk to a company that serves the government or is in healthcare or pharma, they're more careful. They don't want AI to make business decisions on behalf of humans without humans in the loop to check what the AI is going to do. Some of these companies are far more careful about what data goes into the models and what kind of contracts they have with the vendors.
And then we have more tech-leaning companies who are way more advanced, and they want to try out every single thing, and their R&D team wants to run and get the next library or the next tool and try everything out and see what fits.
So, it's definitely a range, and for good reasons, because they all have different risk profiles. Their regulatory burden is different, and that dictates how open they are to AI, what kind of use cases they want to enable, and the risk stance of their CISOs.
Q. How has the dynamic changed now that we’ve moved from chatbot AI to agentic AI? It’s harder to secure what agents do than what they say.
It's really interesting, because Vanta is unique here: our CISO, who is my boss, doesn't just lead security for the company. She leads IT, she leads AI enablement, and she secures AI as well. So she's enabling people on AI and securing AI at the same time. It's a really interesting perspective seeing her do it all.
I would say that from an AI enablement perspective, we see it in four stages. First are the laggards, who don't want to do anything with AI. Second are people who want to use AI but use LLM engines as a replacement for Google search. Third, people start creating custom GPTs and try to scale themselves with automated workflows that are repeatable in nature. And fourth, they start solving problems that an existing tool could solve, but they'll just build the app on their own with AI-generated code.
With each of these stages, different challenges come in from a security perspective. If you're using an LLM as a search engine, which LLMs are you using? Even if they are publicly known, verified LLM engines, if you don't have good contracts with them, your data is effectively their data, and they're going to train their models on it. If you're creating custom GPTs, what data are you passing through them? Data governance, data lineage, all these things come into the picture. If you're creating apps, where are the apps going to live? Which infrastructure are they going to run on? Who's going to maintain them? Who's going to use them? Are they going to be open to customers? If so, have they gone through the whole SDLC, with checks that everything is fine? What vulnerabilities exist? What third-party vendors are you using? So, it's really complicated from an AI enablement perspective.
The vendor perspective is chaotic too. If an existing SaaS company now has AI bots or AI engines, we need to update our contracts with them to say: yes, that part of your product doesn't access our data, but what about your AI agents? We have mandatory questions going out to them, and until those are answered, until we have a good understanding of their models and what they're doing with our data, we turn the feature off. The moment we understand it, we update our contract and turn it on.
Q. How are you seeing the shadow AI landscape shake out, and how can organizations start to approach governance for shadow AI agents?
I think the problem here is that if you're too restrictive as security and IT leaders, you slow people down, and then they're going to circumvent the process and still do what they want to do; they just won't tell you. So, governance is important, but don't have too many steps, too many parties, too many committees, or every use case reviewed by the CEO. If you do, people are going to feel like you're a blocker, that you're hurting velocity, productivity, and R&D, and they will not partner with you.
A good mix is needed. A few things I have seen work really well: define the use cases. For these use cases, go ahead, just let us know what the POC is, and do it. But if you use this type of data, then it needs to go through a legal review, and even then there is a fast-track process. Have these approaches and options available to people, so that they partner with you more than anything else.
Shadow AI is really critical, because when people just use their personal credit cards to buy something, we have to find really interesting ways of discovering it. Look at API tokens, look at where your data is going. Endpoint security is not enough, network security is not enough, and DLP is not enough anymore, because none of them check for these things. So we have to be creative in our procurement process and creative in after-the-fact checking: send out surveys, keep an open relationship with people, and make them feel that the security team is partnering with them to make these tools accessible instead of blocking them.
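As one concrete illustration of "look at where your data is going" (an editorial sketch; the endpoint list is illustrative and far from complete, and the log format is assumed), egress or proxy logs can be scanned for AI API traffic that never went through procurement:

```python
# Minimal sketch: scan egress proxy logs for traffic to known AI API
# endpoints that were never procured. Domain list and log columns are
# illustrative assumptions, not a complete inventory.
import csv

KNOWN_AI_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED = {"api.openai.com"}   # tools that went through procurement

def flag_shadow_ai(proxy_log_path: str) -> set[tuple[str, str]]:
    """Return (user, host) pairs for AI traffic outside the sanctioned list."""
    hits = set()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):        # assumes columns: user, dest_host
            host = row["dest_host"].lower()
            if host in KNOWN_AI_ENDPOINTS and host not in SANCTIONED:
                hits.add((row["user"], host))
    return hits
```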
Q. You’ve described shadow AI agents as a hidden supply chain risk. Can you talk about that?
There are multiple aspects to it. When traditional SaaS tools gain AI agents that people can use, it's really important that the contracts we have in place with those vendors reflect the new terms, because of the new type of services they are providing. What kind of data can we give them? What kind of SLAs do they have? Are they going to train on our data or not? And what if they're a sub-processor, where our customer data flows into them? Then it's even stricter. So those terms, SLAs, incident response, and all the notification processes are really important.
But then there's the other side, which is open-source libraries and vulnerable components. Open-source libraries are one thing, but there are also open-source tools for AI, and sometimes those tools are created specifically to circumvent security, because people want unblocked productivity. That becomes really challenging. It requires a mix of a really good security culture in the company and detection tooling that can see what people are doing, what sites they are visiting, and what API calls are being made.
Yesterday, I learned that there is a library that tells Claude Cowork to keep saying yes on your behalf. It circumvents the human-in-the-loop process where you have to keep authorizing things: the library just keeps hitting yes, again and again, so the human in the loop goes away. It's an open-source library, and it came out a week ago. My first thought was: are people using it in our company? This is where our supply chain risk has materially increased, like we have never seen before. We were already dealing with all the shadow IT problems; shadow AI is exponentially larger.
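The mechanism behind a library like that is trivially simple, which is part of the risk. A hypothetical sketch of the general pattern (not the actual library; the command and prompt patterns are placeholders) just watches an agent's terminal output for confirmation prompts and answers them on the human's behalf:

```python
# Hypothetical sketch of how an "auto-yes" wrapper defeats human-in-the-loop:
# it spawns the agent CLI, waits for approval prompts, and rubber-stamps them.
# The command and prompt regexes are placeholders, not a real tool's behavior.
import pexpect  # third-party: pip install pexpect

child = pexpect.spawn("some-agent-cli --interactive", encoding="utf-8")
while True:
    idx = child.expect([r"\(y/n\)", r"Do you want to proceed\?", pexpect.EOF],
                       timeout=None)
    if idx == 2:          # agent process exited
        break
    child.sendline("y")   # every approval prompt is answered automatically
```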
If you treat AI as just another technology which needs to have governance in place, you can ask the right questions. Who’s the owner? Who’s accountable? What’s the data lineage? What’s allowed? What’s the contract? What’s the procurement? Who maintains it? And in AI specifically, it’s really important to define things like model drift. The model that was running in a particular way in January is not going to run in the same way in March. What is that drift? What’s the quality? What’s the data quality? What’s the upstream and downstream impact?
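Drift can also be made measurable. One common approach (an editorial illustration, not something prescribed in the conversation) is to compare the distribution of a model's output scores between two periods, for example January versus March, with a population stability index:

```python
# Illustrative drift check: compare how a model's output scores were
# distributed in two periods using the population stability index (PSI).
# A PSI above roughly 0.2 is a common rule of thumb for investigation.
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline + current), max(baseline + current)
    width = (hi - lo) / bins or 1.0

    def freqs(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [(c + 1e-6) / len(xs) for c in counts]   # smooth empty bins

    b, c = freqs(baseline), freqs(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

january_scores = [0.2, 0.3, 0.35, 0.4, 0.41, 0.5]   # stand-in data
march_scores = [0.5, 0.6, 0.62, 0.7, 0.75, 0.8]
print(f"PSI: {psi(january_scores, march_scores):.2f}")  # large value -> drift
```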
It's tricky, for sure, and I don't think we have answers to everything. But the further along we are in experimenting, understanding, treating our engineers and other people in the company as partners, and keeping that bridge open all the time, the further ahead we are in the game of securing the way they're using things.
Q. Do you think longer term it will be scalable to keep humans in the AI process, or will agents start doing so much so quickly that it’s not feasible?
I love this question. Because Vanta builds its own AI agents too, we are very keen on understanding what the AI quality is when something is released, how we monitor drift, and how we check for quality. It's really important for us to give our customers a good experience using our AI agents.
Internally, we don't use a human in the loop to check every single eval. We use a machine: an AI agent, an LLM as a judge. But there is a human in the process who sets the parameters of the LLM-as-a-judge and oversees the whole architecture and how the evals run. When the results come back, they look at whether the results make sense, and we tweak the LLM-as-a-judge accordingly. So it's an AI agent running, an AI agent as a judge, and then a human overseeing both. That materially decreases the dependency on the human.
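As an editorial sketch of that loop (not Vanta's implementation; the rubric and the `client.complete` call are assumptions standing in for whichever LLM SDK is actually used), the judge scores each agent output against a human-written rubric, and the human only reviews what the judge flags:

```python
# Sketch of the described pattern: an agent's outputs are scored by an
# LLM-as-a-judge whose rubric a human defines; the human reviews only the
# flagged results instead of every single eval. `client.complete` is an
# assumed SDK method, swap in your actual LLM client.
RUBRIC = """You are grading a security-GRC agent's answer.
Score 1-5 for factual accuracy and policy compliance.
Reply with just the number."""          # human-set judge parameters

def judge(client, agent_output: str, expected: str) -> int:
    prompt = f"{RUBRIC}\n\nExpected: {expected}\nActual: {agent_output}"
    return int(client.complete(prompt).strip())

def run_evals(client, cases: list[dict]) -> list[dict]:
    flagged = []
    for case in cases:
        score = judge(client, case["agent_output"], case["expected"])
        if score <= 3:                  # human spot-checks only low scores
            flagged.append({**case, "score": score})
    return flagged                      # handed to a human reviewer
```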
I think that's scalable. We can make the LLM-as-a-judge much better, much more informed, much more accurate: check for drift, keep it evolving. Because we can't just create an AI agent and walk away from it anymore; we have to constantly check for quality drift. The security team helps the product team define the parameters, because these are security agents in the GRC product.
Q. You’ve talked about the CISO burnout paradox, the idea that AI is creating a bigger burden for the CISO than easing it. What’s your take on that?
I can see that, because shadow AI is exponentially bigger than shadow IT. The vendor security process is much more important, much more detailed, and has many more steps than before, so it has increased the security burden by a large extent. I think that's where the burnout is coming from. Your security architecture, your application security, your vulnerability management, how you check for agents: non-human identities have increased exponentially in the last few months. So it's not just the human identities, the employees and contractors, to manage, but all these non-human identities too. The workload, the things to manage, the things to secure, have increased so much, and that landscape is not settling down. It's only evolving.
That being said, AI can also be used for security, and that's top of mind for us. At Vanta, we have an agentic trust platform, and we run a lot of our GRC and security operations by default using AI. It is helping us scale without just adding headcount to the problem, by finding automation opportunities in the lower-hanging, less complex work, while still not letting AI make the decisions for us. It is giving us a path forward. But I don't think the curve of complexity is being matched by the curve of efficiency yet. It's getting much more complicated. We are trying to get there, but it's a journey for sure.
Q. Are you optimistic about the future with AI?
I am very optimistic about the future of AI. From a security perspective, I still think there are a lot of things we need to figure out. Which vendors will get consolidated? Which players will become the key players for securing agents and the different aspects of security around them? How will AI become another infrastructure element that gets protected? What compliance frameworks will come up, and how will the compliance landscape evolve? How will the testing of it evolve? How will Vanta's GRC platform look different with so many agents doing so much more?
So, I think there's more to be seen, but it's a very exciting time to be witnessing all this change and seeing how fast it's happening. As I said before, it's like dog years: everything happens so fast. It's an exciting time.