Threat actors have been increasingly embedding artificial intelligence (AI) into their cyberattack workflows, using the technology to accelerate reconnaissance, social engineering, malware development, and post-breach operations.
According to new research from Microsoft Threat Intelligence, most malicious AI usage today focuses on generative tools capable of producing text, code, or media. Attackers are using these models to draft phishing emails, generate malware scripts, translate content, summarize stolen data, and automate infrastructure tasks, reducing the technical effort needed to run campaigns consistently.
Instead of replacing human operators, AI currently acts as a force multiplier, enabling threat actors to deploy attacks faster and at greater scale.
“Threat actors are leveraging AI-enabled attack chains to increase scale, persistence, and impact,” researchers from Microsoft Threat Intelligence noted in the report.
North Korean Campaigns Illustrate AI-Driven Operations
Microsoft highlighted several examples linked to North Korean state-aligned groups, including Jasper Sleet and Coral Sleet, which rely on AI to support large-scale identity fraud operations involving remote IT workers.
According to the report, threat actors are using generative AI to craft resumes, professional emails, developer portfolios, and social media personas. These fabricated identities help operatives secure legitimate employment at target organizations and gain long-term access to corporate systems.
The researchers also observed the groups using AI tools to create phishing messages in several languages, as well as tailored social engineering content based on information scraped about potential targets.
In some cases, AI-generated imagery and deepfake technologies were used to produce professional headshots or alter identity documents to support the fraudulent personas.
Microsoft has also detected attackers attempting to bypass AI safeguards through “jailbreaking,” prompting techniques designed to trick models into generating malicious code or disclosing sensitive information.
While generative AI dominates current malicious activity, the tech giant said attackers are beginning to experiment with “agentic AI” systems capable of autonomous decision-making and multi-step task execution. Although still limited in real-world use, these systems could eventually enable semi-automated cyber operations.
Despite the risks, Microsoft said AI also strengthens defenders when applied properly, supporting threat intelligence, automated detection, and coordinated disruption efforts across security platforms.