The agentic AI revolution is not just changing how software gets built. It is changing who builds it, how fast it ships, and what kinds of vulnerabilities end up in production.
AI coding agents have moved from chatbots to autonomous builders capable of constructing entire applications without any input from engineers, and the security implications are accelerating faster than most organizations can respond.
Manoj Nair is the Chief Innovation Officer at Snyk, where he leads the company’s Emerging Technologies and Solutions Office. His team is responsible for incubation and future acquisition strategy, ensuring Snyk’s long-term vision aligns with customer needs.
Before Snyk, Nair served as Chief Cloud Officer at Commvault, co-founded HyperGrid, and held product leadership roles at HPE, Dell EMC, and RSA Security. He holds more than a dozen information management and security patents.
Expert Insights spoke to Manoj Nair at RSAC 2026 to discuss how Snyk is approaching the agentic security challenge, why malicious skills are the new malware, and why the old approach of giving developers 90 days to fix vulnerabilities is no longer viable.
Q. What are the main themes for Snyk at RSA this year?
For us specifically, this is about EVO. We launched the vision at our AI security summit in October, and this was the GA moment for our AI SPM module. It’s focused on how companies get their governance of building with AI under control. A lot of companies have AI governance boards and COEs and a lot of documentation about what models you’re allowed to use. But they have no way to enforce visibility, understand risk, and enforce that policy. That’s the lifecycle we’ve solved with visibility, intelligence, enforcement, and continuously updating risk.
It’s been really good to go from vision to incubation to design partners to early adopters. We’ve had seven-figure commercial deals done before the GA, because that’s how much it was solving a must-have problem. It’s been deployed in some of the largest organizations in the world with over 100,000 repositories. I was talking to one of them yesterday and catching up, thanking them for the design partnership. At the end of the meeting, they said, oh, we just saw your blog that LiteLLM is compromised. Well, thank God we have EVO. And then right there in front of me, the user went to find where they have it, and they said, we’ve got to drive remediation immediately. So that’s what’s happening right now. People are grappling with the challenge of shadow AI, and the AI supply chain is obviously being targeted.
We also launched our overall vision for how we’re thinking about agent security as a solution. There are two parts: agentic dev security, and then agentic applications, people building agents and AI for their business processes. So that agentic dev lifecycle, including citizen development coming in now, and how do I control and how do I secure that? That’s been a big conversation at the show.
Q. For people who might not be aware of EVO, can you give a brief overview of what it does?
Snyk is a company that started with developer security, enabling developers with security context right where they work. Open source packages, code, infrastructure as code, containers: that whole lifecycle is something we secure by enabling the developer. With EVO, we built on top of the Snyk platform. We were looking at what you would need to do to secure agentic AI. And the aha moment was: why don’t we use agents? These are unpredictable, exhibiting uncertain behavior. We flipped the power of the agents to solve that problem.
EVO is the first agentic security orchestrator. We built it for security and it itself is a bunch of agents taking care of things, running on top of the Snyk platform. If you’re already scanning with Snyk, we have an agent that can mine that data to figure out what AI components you have. We’re using our ability to understand artifacts and supply chain. So, we’re giving existing customers a very quick 15-minute onboarding advantage. But it’s also where security teams can start to figure out how to use AI to secure AI.
Things like natural language policy. You don’t need to be an expert in any specific domain. That’s immediate value. But you can’t think of every question every organization is going to ask. We built it so they can become power users. We’re building something like a Cursor for security engineers. And we’re certifying them to become the AI security engineers. That’s the vision of EVO: how do you control the chaos and speed at which AI is being adopted?
Q. Since we spoke at Black Hat last August, how has the risk around AI-generated code and the speed of development evolved?
The speed is incredible. Last August we talked about Cursor and Windsurf, the next generation after Copilot. Copilot was the assistant. Since then, we’re seeing an explosion. I don’t think Claude Code existed last August. It’s just going back to the CLI. It’s an agentic CLI. People are firing up terminals and having Claude Code do complex tasks. The IDEs have now become a place where you review code. You’re expressing intent. So, you’re seeing this acceleration of the agentic dev lifecycle.
It’s gone from “it helps me with a little task” to people giving entire swaths of “go build me the software” or big components of software. Multiple agents working as teams. Swarm-based coding. It’s a sea change. Even from December onwards, there’s been a step change in the capabilities of both the models and the agents. Cursor is doing half a billion in revenue for something that didn’t exist about a year ago. That tells you this space is not settled yet.
And with that speed, we’re also seeing very different supply chain attacks. Terminology that didn’t exist before. MCP servers are giving agents local context and the ability to connect into your existing environment and other tools. Obviously, a lot of risk. We researched toxic flows in MCP servers. Since then, agents have become much more action-oriented. They don’t just want to chat, they want to do. This whole concept of skills came out. Skills allow agents to go from chatting to doing.
Probably the most extreme use case was OpenClaw. If it didn’t have these skills, it wouldn’t be so viral, and it wouldn’t be under so much attack. We did research on skills. We found skills are actually being exploited. It’s like malware in there. Three lines of English can compromise your environment. It’s not a binary or an executable.
You’re having all of this adoption. You’re having people trust their agents more and more. Agents are getting a lot more power. And then they’re being compromised. So how do you safely adopt? You now have to think about the problems we talked about in the past: the code is still not very secure, it’s getting better. The artifact itself has other kinds of issues that didn’t exist before, like business authorization and business logic issues. The behavior is still not deterministic, so it could go off the rails. And the rest of the world will catch up. These risks are already there. You’ve got to safely adopt and think about this agentic dev lifecycle as something completely different.
Q. It’s almost like hiring someone to do your weekly grocery shop, but they don’t know what’s good food and what’s bad food. They just pick out what they think is the most efficient, feed it back in, and you’ve got no way of reviewing that. And there will be teams of agents for every employee. How do you think about securing that?
It’s a good analogy. And it gets worse than that, because there’s somebody there who wants to poison you. So, you’ve got to think about not just the humans but the agents and securing those agents. Secure their output, check their output. Secure their environment so they’re not being poisoned, not being led to do things they shouldn’t, not being prompt injected, and their permissions aren’t being exploited. And then continuously monitor how their thought flow is going, so they’re not veering off the intent. You’ve got to do all of them.
So, for our solution, we’re scanning the skills, filtering out what agents can pick up before it enters your environment. And then as they’re producing, we’re telling them, no, that’s not good, here’s more security context. We’re using our intelligence to make their output better. And then the behavioral aspect: it’s interesting what employees will do when they have this kind of power. They could take PII and feed it to an agent. The employee thinks it’s okay. But in the moment, the agent’s going to take that and create a database and store it. That’s not good. Or pull secrets from somewhere because it needs them, and cache them insecurely. You need to know what they’re doing continuously.
We’re looking for analogies. I was thinking about fighter pilots. They’re trained with the OODA loop concept: observe, orient, decide, act. You’re pulling 5Gs, you’ve got very sophisticated machines, and stuff is coming at you very fast. That’s the analogy for the AI security engineer who has to enable all these builders. You’re going to have teams of agents. And we’re training people in that philosophy of how to use something like EVO, which is your very sophisticated machine with a swarm of agents doing the work for you. But still enable the human to 10x their abilities to keep up with that environment.
Q. At what point is the human needed? Can a human actually keep on top of all of these agent actions, or does the human become the bottleneck?
If you don’t design the system well, then the human just cannot keep up. You have to get the intent, the organizational needs, and the regulatory needs all codified so the agent can keep going. And then it’s really exception management. The team of agents is highly capable, but you have to train them for your environment. And then make sure that all the things they’re not trained for are constantly surfaced. You become managing intent and exceptions. That’s the only way it scales.
Some organizations today are thinking they can use agents because they’ve got to micromanage them. But we’ve got to move away from that. Rather than having a human in the loop, enable the system. Set the guardrails. Use capabilities that make sure you’re giving agents the right guardrails. It’s a trust but verify model. Make sure they can operate in a trusted way, then verify that they are operating in a trusted way.
Q. An OpenAI security researcher claimed that AI coding agents will soon be better than the average developer at security. But prompt injection can’t really be solved. How do you see that risk playing out?
There are obviously multiple risks. But the core of it is that generative AI is non-deterministic. Maybe somebody’s figuring out how to make it more deterministic. But so far, every frontier lab and top researcher tells me that if you take that out, you take out the magic. You basically constrain it so much that you’re no longer having that aha moment that we love when we use Claude, Gemini, ChatGPT. They come together. So, if you understand the foundation, then the risk never goes away.
Let’s assume the code output is better than human code. What about the fact that you are susceptible to prompt injection? Someone can place malicious skills in the environment. MCP servers can be poisoned. Agent falloff is a real thing. Why does it happen? Was it context? Was it long running agents? They can do something unintentionally malicious over a period of time. AWS Kiro: the agent decided to delete the database. Meta has been having more outages. These are the companies at the cutting edge. And they are suffering from this.
50% of back-end code from the latest models is either incorrect or insecure. That’s not me saying it. It’s baxbench.com. They’re testing against real-world scenarios. And if the benchmark is known to the models, they know how to game it. So, you have to look for independent sources. There’s a lot of hype. It keeps getting better, but it’s not going to be fully solved. And on the flip side, the autonomous attacker now has a sniper shot. Even if there’s one exploit, it’s not good enough to just be better than the human. You need the checks and balances and guardrails.
All of those new attack surfaces being created need to be covered. Unfortunately, as is always true for attacks, you only need to be right once. Defenders have to be right all the time. And you cannot have a non-deterministic model that sometimes says this is a bug and sometimes says this is not a bug be the defender. But it can be leveraged for what it’s good at.
Q. We’ve had announcements from Anthropic and OpenAI on code security baked into the models. What are your thoughts on that?
I think it’s great. This is the flip side of what you just asked me. If they didn’t think this was a problem, they would not have built that solution. And so, my statement about the 50% of insecure code: let me go one layer deeper. What the models are producing is the kinds of bugs that humans don’t usually produce. They’re getting better at fixing the things humans did badly. But because of the core of non-determinism, they produce a lot more business logic and authorization issues. Turns out, if you point the models back at their own code the right way with the right security context and skills, they’re actually best at fixing those kinds of issues.
Across our customer base, we’re seeing somewhere between two to 10x increase in per-developer actual vulnerabilities over the last year. The only attribution we can give for that is AI-generated code. We’re getting a lot better at focused remediation, so we’re remediating way more than before, but we’re also seeing the open issue count per developer going up. It’s a bit like marking your own homework. You’re actually a big contributor to the problem. I’m glad they’re now helping remediate it and we’ll leverage it. And we welcome innovation around security. That’s going to help us help our customers.
But security has to be right deterministically, 100% of the time. I have a blog on Claude Code security. It’s great that they did this. But it’s that, plus deterministic context, plus dynamic testing. We’re seeing certain kinds of issues that static analysis and contextual analysis from the models cannot find. But our dynamic testing engine, Snyk API and Web, is finding broken authorization and BOLA issues. Those APIs have the issues, and we’re able to find them and correlate back to the line of code in static analysis findings. It’s a combined approach. Security is a team sport.
Q. How are supply chain risks and SBOM practices evolving with AI-generated code?
It’s even more important. You don’t just need a software supply chain catalog, you need your AI supply chain catalog. So, we’ve given every Snyk customer the ability to view their AI bill of materials using the Snyk platform. If you want to manage it and govern it and use risk intelligence, that’s where EVO AI SPM comes in. But at least have visibility first. And then it’s governance. You need the software supply chain, the AI supply chain, and the agent dev supply chain, all three tracked.
Look at the GitHub Actions compromise hitting security companies. Look at how much more NPM and other supply chain issues there have been in the last year. That’s attackers using AI. You have to track it, but that’s not enough. Don’t just think about the artifact. Think about the governance around it. Think about how you’re making sure things aren’t entering your environment so easily and quickly. What is making that happen? What do you do to prevent? What do you do to continuously understand evolving risk? And how do you flip the bit? How can you use agents and autonomous techniques to remediate, without having a human in the loop? If it takes only minutes for an attacker to get into an environment, you have to automate these things.
Q. What do you see the next six to twelve months looking like?
Anyone who says they can predict what’s going to happen in this environment, I would love to meet them. All I can predict is things will move faster than we think they will. Models are going to get better at producing code, and they’re going to get better at producing attacks. You have this dichotomy of abilities. I think you’re going to find a lot more zero days, unfortunately. There’s too much business pressure to move fast without governance and guardrails in place.
And you have agents whose agency can be compromised. You have a new class of insider attacks, but it’s not a human, it’s an agent that’s going to be the reason you get compromised. On the positive side, all of that allows us, with the right approach, to move much faster than previously conceivable. We had a customer, Labelbox, a provider of labeling platforms for model companies. Two years of technical debt was remediated in two weeks using our agent-powered remediation alongside Cursor. Companies are realizing the old approach of fixing protocols, making exceptions, and giving people 90 days to fix a vulnerability is not going to work. Exploits are happening the moment vulnerabilities are disclosed.
The solution is for organizations to think about how to use the same power. Fix the foundation: no vulnerability left behind, autonomously remediated. Stop thinking in siloed, compliance-driven approaches. Think holistically. Your applications cannot have foundational issues. Then your agentic dev lifecycle has to be treated completely differently. Your developers and your citizen developers, your marketing team, your finance team, your legal team, they’re all going to use agents. They don’t understand that an agent downloading a skill, or building one, has code inside it. You can’t expose issues to citizen developers the way you expose them to human developers.
Think of it as three problems. Fixing the foundation. Securing the agentic dev lifecycle. And building agentic apps with governance that’s not sitting on paper. Constrain the blast radius by design, continuously test and try to break the application, and do all of that fast without slowing things down, because no CEO wants their agentic app to be slowed down. Everyone is a developer now, even if they don’t know it.