Anthropic has confirmed that portions of the internal source code for its Claude Code AI coding tool were unintentionally exposed due to a packaging error during a software release.
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson told Expert Insights.
“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
The issue occurred when Claude Code version 2.1.88 was uploaded to the npm registry with a source map file attached. Source maps are debugging aids that let developers trace compiled code back to its original, uncompiled form. In this case, the file reportedly exposed large sections of readable TypeScript source code embedded in the distributed package.
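The mechanism is simple to demonstrate. In the V3 source map format, the optional `sourcesContent` field can carry the complete original source files verbatim, so anyone who downloads the package can read them back out. The sketch below uses a hypothetical file name and contents, not anything from the actual leak:

```python
import json

# A minimal V3 source map, as might ship alongside a compiled bundle.
# The file name and code here are hypothetical stand-ins.
map_json = """{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/cli.ts"],
  "sourcesContent": ["export function main(): void {\\n  console.log('hello');\\n}\\n"],
  "mappings": "AAAA"
}"""

def recover_sources(source_map: dict) -> dict:
    """Pair each source path with its embedded original text, if present."""
    contents = source_map.get("sourcesContent") or []
    return {path: text
            for path, text in zip(source_map["sources"], contents)
            if text}

recovered = recover_sources(json.loads(map_json))
print(recovered["src/cli.ts"])  # the original TypeScript, fully readable
```

When `sourcesContent` is populated, no reverse engineering is needed at all; the original code is sitting in the `.map` file as plain text.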
The file was quickly discovered by developers, with copies circulating on public repositories within hours. Analysts report that the exposed data included roughly 1,900 files and more than 512,000 lines of code.
Anthropic has since removed the affected package from the npm registry and confirmed it is deploying safeguards to avoid similar release issues in the future.
Internal Features and Security Risks Exposed
Early analysis suggests the leak revealed 44 feature flags tied to unreleased capabilities, including a background automation daemon called KAIROS, as well as task scheduling, browser control, and persistent memory across sessions.
While the firm’s core AI models were not exposed, security professionals warned that source code leaks can still pose meaningful risk. Access to system prompts, dependencies, and internal logic can enable reverse engineering, intellectual property theft, and the identification of potential flaws.
This is not the first time Anthropic has made this mistake: a similar source map exposure occurred in February 2025, raising questions about whether the company’s release process includes adequate automated checks for development artifacts.
The incident also raises red flags around software supply chain security. Publishing development artifacts like source maps in production packages can reveal sensitive implementation details, especially in fast-moving AI environments where release cycles are frequent.
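This failure mode is straightforward to gate against in a release pipeline. As a hedged sketch (not Anthropic's actual process), a pre-publish step could scan the staged package directory and fail the release if any debug artifacts remain:

```python
from pathlib import Path
import tempfile

def find_dev_artifacts(package_dir: str) -> list:
    """Return relative paths of debug artifacts (here, .map files) found
    in a staged package directory; a release gate fails if any exist."""
    root = Path(package_dir)
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*")
                  if p.suffix == ".map")

# Example: stage a fake package tree and scan it.
with tempfile.TemporaryDirectory() as staged:
    Path(staged, "dist").mkdir()
    Path(staged, "dist", "cli.js").write_text("// compiled bundle\n")
    Path(staged, "dist", "cli.js.map").write_text("{}")
    leaked = find_dev_artifacts(staged)
    print(leaked)  # ['dist/cli.js.map'] -> release should be blocked
```

For npm specifically, an allowlist in the package manifest (the `files` field) achieves the same effect declaratively, by enumerating what ships rather than trusting that nothing extra is present.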
For security leaders evaluating AI coding tools, the incident adds a new dimension to vendor risk assessment. Source code leaks from tool vendors can expose the logic and feature roadmap those tools use to interact with your environment. Organizations that have deployed Claude Code should verify they are not running version 2.1.88, and consider whether AI tool vendors should be subject to the same software supply chain vetting applied to other third-party code dependencies.
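Assuming Claude Code was installed from npm under the name `@anthropic-ai/claude-code` (the registry name is an assumption here), the installed version can be read with `claude --version` or `npm ls -g @anthropic-ai/claude-code`. A small, hypothetical helper like the one below can then flag the affected release in a fleet-wide inventory script:

```python
def is_affected(version: str, affected: str = "2.1.88") -> bool:
    """Compare dotted version strings numerically against the release
    reported to have shipped with the source map (hypothetical helper)."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))
    return parse(version) == parse(affected)

print(is_affected("2.1.88"))  # True: this install needs updating
print(is_affected("2.1.89"))  # False
```

Comparing parsed tuples rather than raw strings avoids false matches on prefixes such as "2.1.8" versus "2.1.88".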