BLACK HAT 2025, LAS VEGAS – Vibe coding is the hottest developer talking point of 2025. A new economy of AI-generated apps has made vibe coding startups Replit, Lovable, and Cursor worth millions of dollars, almost overnight.
When millions of people can create apps with just a few prompts and no security reviews, breaches are inevitable. But in the world of AI, speed is everything. How can you ensure robust security is in place, without your business losing the competitive advantages AI-assisted code development can bring?
Expert Insights has been at BHUSA25 this week, speaking with application and product security experts to answer this question and more.
Fixing Vibe Vulnerabilities
Vibe coding has already become a ubiquitous buzzword. But how serious are the risks really? The answer is “very”, says Manoj Nair, Chief Innovation Officer at Snyk. “The risks are real,” he says. “People now recognize that AI-generated code is more insecure than human-generated.” Nair cites a study conducted by Georgetown University, which found that approximately 48% of AI-generated code was insecure.
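To make the kind of flaw such studies flag concrete, here is a short illustrative Python sketch (not taken from the Georgetown study) contrasting an injection-prone database query, the sort of pattern code assistants frequently reproduce, with its parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" rewrites the query (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query lets the driver treat input as a literal.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 rows: the injection leaks every user
print(len(find_user_safe(conn, payload)))    # 0 rows: payload treated as a username
```

Both functions look plausible at a glance, which is exactly why this class of bug slips through when code is generated faster than it is reviewed.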
Security teams cannot avoid this risk, as almost every single development team is using AI tools in some capacity. And if corporate is telling them not to use AI, there’s a high likelihood they’re just doing it under the table.
“Some people are doing the ostrich mythology: burying their heads in the sand and thinking devs are not using coding agents,” Nair says. “You’ll be surprised. Cursor has had amazing growth, but rarely do you hear that an enterprise has licensed Cursor. It’s the devs; they’re paying twenty bucks a month or whatever. So, there’s shadow AI in the coding tool chain.
“Very rarely do I find a security team has successfully blocked the adoption. No CEO wants to be left behind on the AI race.”
For years, the security industry has pushed the notion of "Shift Left," moving security earlier in the development process. From Nair's perspective, with AI tools, we need to go further than Shift Left. Snyk has launched a new framework, Secure At Inception, which provides "deeply-integrated and real-time security scanning that runs at the point of code generation" within LLM coding tools like Cursor.
So, unlike traditional SAST or DAST, which find vulnerabilities in human- or AI-written code after it has been written, vulnerabilities are found and fixed before they are ever surfaced to the developer. Rather than handing a human a list of recommendations, issues are remediated before the code can ever go live. "This approach is very important," Nair says. "It flips [AI code] from being less secure to more secure."
This flips much of the typical debate around vibe coding. What if AI helped us write more secure code, with guardrails baked in from the start?
“There’s such a huge potential when you see this explosive growth of vibe coding to truly secure code better than humans would. That’s the thing that people need to get educated on,” Nair says. “Let’s not do [security] in this era like we did in the cloud era, which was after the fact. Let’s think about how to be proactive. Let’s secure AI by default.”
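Snyk has not published the internals of Secure At Inception, but the general idea of scanning at the point of generation can be sketched in a few lines of Python: a hypothetical wrapper that runs a check over model output and blocks flagged code before the developer ever sees it. The function names and pattern rules below are illustrative assumptions, not Snyk's implementation; a real tool would apply full SAST analysis rather than regexes.

```python
import re

# Illustrative rules only; a production scanner would use real static analysis.
INSECURE_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"f\"SELECT .*\{": "SQL built via f-string interpolation",
}

def scan_generated_code(code: str) -> list[str]:
    """Return a list of findings for one generated snippet."""
    return [msg for pattern, msg in INSECURE_PATTERNS.items()
            if re.search(pattern, code)]

def guarded_generation(generate):
    """Wrap a code-generation callable so insecure output never surfaces."""
    def wrapper(prompt: str) -> str:
        code = generate(prompt)
        findings = scan_generated_code(code)
        if findings:
            # In a real tool, an auto-fix or regeneration loop would run
            # here, before the snippet ever reaches the editor.
            raise ValueError(f"blocked insecure generation: {findings}")
        return code
    return wrapper

# Stand-in for an LLM that emits an insecure HTTP call.
fake_model = lambda prompt: "requests.get(url, verify=False)"
try:
    guarded_generation(fake_model)("fetch a URL")
except ValueError as err:
    print(err)
```

The key design point is the placement of the check: it sits inside the generation loop, so insecure output is a transient internal state rather than something a developer has to notice and fix later.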
Good Vibes?
During a briefing for cybersecurity founders and investors on vibe coding, Shahar Peled, Co-Founder and CEO at Terra Security, explored how security, governance, and safety can be built into this new wave of development.
“I just want to be clear: I love vibe coding,” he said. “I think it’s an amazing shift toward the future—we just have to be really thoughtful about how we do it. And yes, if we do use it for a core business unit, it needs to be done properly with the right security.”
Threat researchers from Intel 471 present a more cautious outlook on the risks of AI. Mike Mitchell, VP of Threat Hunt Intelligence, told Expert Insights: "I think the biggest issue might be some of those companies using AI agents and not understanding how to deploy them, and then causing security issues in their own environment that can be exploited. It's going to open a lot of holes in process and product."
A recent news story highlights this exact risk: an AI coding agent went "rogue" and deleted a company's entire production database.
“AI doesn’t understand security,” Mike adds. “They don’t lock down ports and verticals.”
Product Security Perspectives
Application Security provider Cycode has assembled an "all-star" team (complete with baseball cards) made up of product security experts. At a panel held at Black Hat USA, four product security leaders from this group discussed how AI is changing software products, and how they are evolving their cybersecurity strategies in an AI world.
Here’s what they said:
- Brad Tenenholtz, Product Security Officer, BD: “The average developer writes about 10-100 lines of code a day. Now, you can put together thousands in a couple of hours. So, all it’s really done is increase the volume we need to secure—which means you better make some good threat models, and you better know what’s really vulnerable. I agree that AI is going to force us to adopt these better practices. But in that way, maybe it’s a good thing. Maybe we come out of this more secure.”
- Julie Davila, VP Product Security, GitLab: “A lot of the focus on code assistance is going to expose any kind of fragility in the mechanics that support the software factories that we all support. If you have a sudden 10x increase in your merge requests, it’s really going to test how well you manage toil. The other challenge is that, as we move into the agentic world, contributions not just on the code side, but also on the infrastructure side are going to increase. So, how do you think about it in terms of authorization? How do we do attribution? Sure, you can’t blame the machine, but you still have to know that a machine was involved in that process. What does that look like?”
- Nikola Dalcekovic, Product Security Officer, Schneider Electric: “Traditionally, developers have a design in their head. With AI-assisted coding, we are starting to prompt. So, it’s really a question of whether you prompted it well enough to encompass security requirements or design intent in the implementation phase. I think—especially for junior level engineers—if they overuse and over-rely on AI, there is a risk of them not giving proper context to AI when they use it for development. That’s the risk we need to address.”
- Terry O’Daniel, Head of Security, Amplitude: “AI could potentially help with two key problems in security. One is the signal-to-noise ratio of alerts, and the other is where I spend my very limited human attention; I only have so many security engineers, so I have to ruthlessly prioritize every day where I’m going to spend those human hours. AI really lights a fire under both those challenges. I heard an interesting analogy about this: ‘Here’s AI, congratulations, everyone got a promotion, you just don’t get any staff. You have to use AI to be your team to deliver more.’ So, it’s like we have to treat AI as a tool and a teammate.”
The Future
One of the coolest products we saw at Black Hat this week was Sola Security. Sola enables security teams to build their own AI-generated security apps—think Lovable, but for cybersecurity. The company raised a $30m seed funding round back in March.
CEO and Co-Founder Guy told Expert Insights that, with apps like Sola Security, “the sky is not even the limit” for security teams.
“I think that we are in a very interesting period in the evolution of cyber security. Five years from now, we’ll ask ourselves how we did security before [AI]. And I think this is a really exciting time to be part of the industry.”
One thing is for certain: vibe coding is not going anywhere. AI coding and app-building tools will continue to grow in popularity. In the short term, they are likely to continue to present new risks, and we are likely to see some apps breached and taken down due to AI-generated vulnerabilities—to say nothing of attackers targeting AI code generation models themselves.
But as Ty Sbano, CISO at Vercel, put it this week: “If the word vibe is a real buzzkill for you, don’t worry; it won’t live that much longer. Because it’s just coding!”