As companies race to adopt AI coding assistants, security leaders are sounding the alarm over a lack of visibility and oversight, according to a new survey.
Cycode asked 400 CISOs and cybersecurity professionals how their teams are handling product security risks in the age of AI.
The survey found that AI-generated code vulnerabilities and AI tools are now a top priority for security leaders, ranked above supply chain risks, secrets in code, and cloud misconfigurations.
Every security leader surveyed said they were planning to increase their investments in AI security controls in response to these fears.
Despite the concerns, use of AI-generated code shows no sign of slowing down. Every organization surveyed has AI-generated code in its codebase, and 97% of respondents said they were using or piloting AI code generation tools.
In fact, almost a third of respondents said that AI now generates the majority of code within their organizations. Productivity gains, improved quality, and faster time to market were all cited as reasons for adoption.
As this adoption continues, the visibility gap between AI adoption and AI management is growing.
81% of security leaders said they lack visibility into how AI is being used, and 65% said they feel more exposed to risk when using AI.
Cycode calls this ‘Shadow AI’: AI tools that have been implemented without security controls or oversight.
Fewer than half (48%) of the leaders surveyed have a formal AI governance framework in place to track Shadow AI.
This blind spot can introduce risk through prompt injection, poor-quality code containing vulnerabilities, or data leakage when developers use unmanaged AI code generators.
All of the security leaders surveyed plan to put more budget into AI-related security initiatives over the next 12 months. Even so, Cycode warns that a supply chain attack targeting Shadow AI is likely in the near future.
“The findings make it clear: AI development is no longer a future trend; it is today’s reality. As security struggles to keep pace with this rapid adoption, the stage is set for a significant supply chain breach, with Shadow AI as the attack vector,” said Lior Levy, CEO and Co-Founder of Cycode.
Securing against Shadow AI risks will require improved visibility, policies, and controls, Devin Maguire, Senior Product Marketing Manager at Cycode, tells Expert Insights.
“The visibility challenge is pervasive but foundational. Organizations need ways to detect and inventory AI technologies, especially when those technologies are used in the Software Development Lifecycle.”
“Second, they need policies to provide guardrails and direction for AI adoption. The goal is not to slow adoption but rather to enable responsible use. Finally, organizations need controls in place to manage and enforce those policies.”
“With these three core elements, you can govern AI with clearly defined policies, detect AI technologies that violate those policies, and enforce controls to keep pace with rapid and ubiquitous AI adoption.”