Anthropic has launched a limited research preview of Claude Code Security, a new capability within its Claude Code platform designed to identify security flaws in software before it reaches production.
The release signals a deeper push into application security testing, specifically in the security stack’s static analysis layer.
From a technical standpoint, Claude Code Security operates in the same general category as Static Application Security Testing (SAST) tools, which scan source code for flaws without executing it. Traditional SAST platforms rely on fixed rules and known vulnerability patterns, which makes them well suited to identifying common issues such as hardcoded secrets or outdated encryption libraries.
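To make the rule-based approach concrete, here is a minimal, purely illustrative sketch of the kind of pattern-matching check a traditional SAST tool applies. The patterns and function names are this article's own examples, not code from any real scanner:

```python
import re

# Toy rule-based SAST check: flag likely hardcoded secrets by pattern
# matching alone, with no understanding of program behavior.
SECRET_PATTERNS = [
    # Assignments like password = "..." with a value of 8+ characters.
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
    # Strings matching the AWS access key ID format.
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any secret pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_password = "hunter2-prod-2024"\nprint("hello")\n'
print(scan_source(sample))  # [(1, 'db_password = "hunter2-prod-2024"')]
```

Checks like these are fast and deterministic, but they only catch flaws that fit a known textual shape.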
Anthropic said Claude Code Security instead relies on Large Language Model (LLM)-based reasoning to examine how code components interact and how data moves through an application.
The new approach is intended to identify more subtle, context-dependent flaws, including business logic weaknesses and broken access controls, issues that often evade rule-based scanners.
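A broken access control flaw illustrates why such issues evade pattern matching: every line looks individually harmless, and the bug is a check that is *absent*. The sketch below is a generic example of an insecure direct object reference (IDOR), invented for this article, not a vulnerability found by Claude Code Security:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: int
    owner_id: int
    total: float

# Stand-in for a database of invoices.
INVOICES = {1: Invoice(id=1, owner_id=42, total=99.0)}

def get_invoice(invoice_id: int, requesting_user_id: int) -> Invoice:
    # BUG: never checks that requesting_user_id owns the invoice, so any
    # authenticated user can read any invoice. No regex can flag this;
    # spotting it requires understanding the application's access model.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, requesting_user_id: int) -> Invoice:
    invoice = INVOICES[invoice_id]
    if invoice.owner_id != requesting_user_id:
        raise PermissionError("not the invoice owner")
    return invoice
```

Reasoning about whether the ownership check *should* exist is exactly the context-dependent judgment that rule-based scanners lack.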
Where it Fits in the Security Stack
Claude Code Security is not a runtime protection tool, and it does not replace Dynamic Application Security Testing (DAST) or runtime application self-protection.
Instead, it fits into the pre-deployment phase of software development, alongside SAST and software composition analysis (SCA).
According to Anthropic, findings go through a multi-stage validation process in which the model attempts to verify or challenge its own conclusions before presenting them to analysts. Each issue is assigned a severity and confidence rating, along with a suggested patch that requires human approval before being applied.
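The shape of that triage flow can be sketched in a few lines. All field and function names here are assumptions for illustration, not Anthropic's actual schema; the point is the human-approval gate between a suggested patch and an applied one:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    severity: str       # e.g. "low" | "medium" | "high" | "critical"
    confidence: str     # e.g. "low" | "medium" | "high"
    suggested_patch: str
    approved: bool = False  # flipped only by a human reviewer

def apply_patch(finding: Finding) -> str:
    # Governance gate: a suggested patch is inert until a human signs off.
    if not finding.approved:
        raise RuntimeError("patch requires human approval before applying")
    return f"applied: {finding.suggested_patch}"

f = Finding("IDOR in invoice lookup", "high", "medium",
            "add owner check before returning invoice")
# apply_patch(f) would raise RuntimeError here, before review.
f.approved = True  # reviewer accepts the finding
print(apply_patch(f))  # applied: add owner check before returning invoice
```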
In internal testing using Claude Opus 4.6, Anthropic reported identifying more than 500 vulnerabilities across production open-source codebases, and said it is coordinating responsible disclosure with the affected maintainers.
For CISOs and AppSec teams, the practical takeaway is that AI-assisted reasoning may complement, but not replace, traditional static analysis tools.
Security leaders evaluating these capabilities should consider integration with existing developer workflows, false-positive management, and governance controls that ensure findings are reviewed before remediation.