New AI-Enabled Malware Adapts Its Behavior Mid-Attack

Google researchers have identified AI-enabled malware that dynamically changes its behavior mid-execution.

Published on Nov 10, 2025
Written by Caitlin Harris

Threat actors are no longer leveraging AI just for productivity; they have begun using AI-enabled malware in the wild, says Google Threat Intelligence Group (GTIG). 

According to Google, the new family of malware integrates Large Language Models (LLMs) to dynamically alter its own behavior mid-execution. This “just-in-time” use of AI enables the malware to generate malicious scripts on the fly, obfuscate its own code to avoid detection, and create new functions on demand.

So far this year, GTIG has observed five types of malware with novel AI capabilities.

FRUITSHELL is publicly available reverse-shell malware that enables a threat actor to execute arbitrary commands on a compromised system. It includes hard-coded prompts designed to help it evade LLM-powered detection and analysis.

PROMPTFLUX is a dropper—a type of Trojan designed to install other malware on a computer. PROMPTFLUX uses the Google Gemini API to regenerate or copy itself by prompting the LLM to rewrite its own source code, then saving the new version of itself to the Startup folder. This allows it to establish persistence on the compromised system, as well as spread by copying itself to removable drives and mapped network shares.

However, since making these observations, Google has taken action to mitigate PROMPTFLUX. 

“Google has taken action against this actor by disabling the assets associated with their activity,” the company says. “Google DeepMind has also used these insights to further strengthen our protections against such misuse by strengthening both Google’s classifiers and the model itself. This enables the model to refuse to assist with these types of attacks moving forward.”

PROMPTLOCK is cross-platform ransomware that enables threat actors to survey file systems, exfiltrate data, and encrypt files on Windows and Linux systems. It leverages an LLM to generate and execute malicious scripts at runtime, on demand.

PROMPTSTEAL is data-mining malware. It uses an LLM to generate one-line Windows commands that, when executed, collect system information and documents stored in specific folders. It then sends that data to the threat actor.

QUIETVAULT is credential-stealing malware that exfiltrates GitHub credentials and NPM tokens. It also leverages an AI prompt and AI CLI tools to scan the compromised system for other secrets, which it then exfiltrates as well.

The Bigger Picture

This year marks the first time that Google’s researchers have observed such a threat being used in the wild. While some cases have proven to be experimental, they highlight a clear shift in threat actor techniques: instead of using AI tools simply to boost their own productivity, threat actors are now integrating AI into intrusion activity itself.

“This represents a significant step toward more autonomous and adaptive malware,” GTIG’s threat intelligence report states.

AI-enabled malware lowers the technical bar for cybercrime, and services offering it are becoming increasingly popular on underground forums. This trend can only be expected to accelerate as the underground market for AI-enabled tools continues to mature.

To help disrupt this market, developers need to implement “robust security measures and strong safety guardrails” when working on AI systems, says GTIG. 

“The potential of AI, especially generative AI, is immense,” the company says. “As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.”

At the same time, threat detection and response providers need to deploy advanced detection methods to help organizations stay ahead of this new evolution of malware.

“Modern detection and response platforms are increasingly effective at identifying advanced malware through behavioral analytics, yet autonomous and AI-assisted malware introduces a new class of challenges,” Eric Russo, Director of Defensive Security at Barracuda Networks, told Expert Insights. 

“Recent threat intelligence reporting has confirmed that some malware now calls large language models during runtime to regenerate code, obfuscate logic, or adapt to its environment. These behaviors can momentarily outpace static or cached behavioral baselines.

“Staying ahead of sophisticated threats requires moving beyond traditional telemetry to approaches that predict and contain attacks early in their lifecycle. Advanced detection methods, such as identifying ransomware groups in their initial stages and automating threat response to isolate compromised hosts, are critical to neutralizing threats before they can inflict damage to infrastructure.”