A cybercrime crew has been observed using a chatbot's help to discover and weaponize a zero-day. The Google Threat Intelligence Group (GTIG) says it caught the group before mass exploitation began, adding that it does not believe Gemini was used, though it did not identify the specific model.
As outlined in a report published by Google on May 11, 2026, the bug, a two-factor authentication (2FA) bypass in a widely used open-source system administration tool, was exploited via a Python script bearing telltale signs of AI authorship: educational docstrings, a hallucinated CVSS score, and textbook-clean formatting.
GTIG said it worked with the affected vendor to disclose the issue and shut down the planned mass exploitation before it could begin.
This was no typical bug class. The flaw stemmed from a high-level semantic logic gap rather than the memory-safety or input-handling slips that scanning tools usually catch.
The team also noted that frontier AI chatbots are getting better at "reading the developer's intent" and surfacing dormant logic mistakes that look functionally correct to traditional scanners.
State-aligned actors are pursuing similar capabilities. APT45 (DPRK-nexus) has been observed sending thousands of repetitive prompts to recursively analyze CVEs and validate Proof-of-Concept (PoC) exploits. PRC-nexus actors are experimenting with vulnerability-focused knowledge bases packaged as Claude code skills.
Supply Chain Attacks Hit AI Components
The report documented adversaries shifting from prompting models to targeting them. In late March 2026, the cybercrime crew TeamPCP (aka UNC6780) claimed responsibility for poisoning code repositories behind the Trivy vulnerability scanner, Checkmarx, LiteLLM, and BerriAI.
The group dropped the SANDCLOCK credential stealer into build environments, harvested AWS keys and GitHub tokens, then monetized the access through partnerships with ransomware crews and data extortion groups.
The LiteLLM compromise drew particular attention given the gateway’s role connecting enterprise applications to multiple LLM providers.
“The AI software ecosystem has emerged as a primary target for exploitation,” the report reads. “While frontier models themselves remain highly resilient to direct compromise, the orchestration layers, including open-source wrapper libraries, API connectors, and skill configuration files, can be vulnerable.”
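The architectural point in that quote can be sketched in a few lines. This hypothetical gateway (illustrative only, not LiteLLM's actual API) routes requests from many applications to many model providers, which is why a single poisoned build of such a wrapper exposes every credential and every prompt passing through it, without the frontier models themselves ever being compromised.

```python
import os

# Hypothetical credential store; a real gateway would load these from
# environment variables or a secrets manager.
PROVIDER_KEYS = {
    "openai": os.environ.get("OPENAI_API_KEY", "sk-demo-openai"),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY", "sk-demo-anthropic"),
}

def route(model: str, prompt: str) -> dict:
    """Choose a provider for `model` and attach its credential.

    Every enterprise request funnels through this one choke point,
    so malicious code injected here can read all provider keys and
    all prompt traffic in a single place.
    """
    provider, _, name = model.partition("/")
    return {"provider": provider, "model": name,
            "auth": PROVIDER_KEYS[provider], "prompt": prompt}

request = route("openai/gpt-4", "summarize this contract")
```

The convenience that makes such gateways popular, one integration point for many providers, is exactly what makes them a high-value supply-chain target.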