Malicious large language models (LLMs) are giving low-skilled attackers capabilities that previously required significant expertise.
According to a new advisory from Unit 42, malicious LLMs such as WormGPT 4 and KawaiiGPT allow virtually anyone to create sophisticated phishing campaigns, develop malware, and execute complete attack chains.
“Many times the difference between a benign research tool and a powerful engine for creating threats lies solely in the intentions of the developer and whether or not there are any ethical barriers,” warned researchers at Unit 42.
WormGPT 4, a successor to the original 2023 version of WormGPT, has been marketed extensively across underground forums and Telegram channels.
Unit 42 noted that WormGPT 4's subscription pricing (USD 50/month, or USD 220 for lifetime access) is in line with other cybercrime-as-a-service offerings.
Using WormGPT 4, users can generate Business Email Compromise (BEC)-style messages, craft sophisticated phishing emails, and produce complete malware templates, including PowerShell scripts that use AES-256 encryption and optional Tor-based data exfiltration.
The Rise of Free AI Hacking Tools
While WormGPT 4 commercializes offensive AI, KawaiiGPT offers similar capabilities for free. The tool is available on GitHub and can be deployed by most users in just minutes.
KawaiiGPT can generate spear-phishing lures, Python scripts for lateral movement across target networks via SSH, and automated data-exfiltration utilities built on legitimate Python libraries.
At the time of writing, KawaiiGPT’s developer reports more than 500 registered users, and there is an active Telegram community where users share prompts and request additional functionality.
Both tools demonstrate how malicious LLMs compress the attack lifecycle, from reconnaissance to ransom note, into a matter of minutes of user input.
Based on Unit 42's analysis, organizations should assume that attackers now have automated assistance when carrying out attacks.
Firms are therefore encouraged to implement stronger identity controls, continuous monitoring, and mature incident response processes to prepare for the growing volume of AI-enabled attacks driven by malicious LLMs.