GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (cloud.google.com)

🤖 AI Summary
Google's Threat Intelligence Group (GTIG) warns that adversaries have moved beyond using AI for productivity gains and are now embedding live AI calls inside malware, marking a new "just-in-time" phase of abuse. GTIG documents experimental families, PROMPTFLUX (VBScript) and PROMPTSTEAL, in which malware queries LLMs during execution to generate or rewrite malicious code, obfuscate payloads, and produce system-extraction commands on demand.

PROMPTFLUX's "Thinking Robot" module calls Gemini (model gemini-1.5-flash-latest) with a hard-coded API key to request VBScript evasion code, and it even contains commented-out self-update logic and recursive rewrite prompts. PROMPTSTEAL (linked to APT28/FROZENLAKE) used Qwen2.5-Coder-32B-Instruct via the Hugging Face API to generate commands that collect documents and system information, likely authenticating with stolen API tokens. This shift is significant because dynamic, LLM-driven mutation and command generation make static-signature detection far less effective and lower the technical barrier for less-skilled actors.

GTIG also highlights social-engineering tactics used to bypass LLM safety guardrails (operators posing as CTF participants or students) and a maturing underground market for multifunctional AI tooling. For defenders, the implications include an urgent need to harden model guardrails, improve abuse classifiers, monitor for abused API keys, and disrupt attacker infrastructure: steps Google says it is already taking while sharing best practices to mitigate this emerging class of autonomous, adaptive malware.
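One defender-side step the summary names, monitoring for abused API keys, can be approximated by scanning scripts for hard-coded LLM endpoints and key-shaped strings like those in the PROMPTFLUX and PROMPTSTEAL samples. The Python sketch below is a minimal illustrative heuristic, not a GTIG-provided rule: the endpoint list and the "AIza" (Google API key) and "hf_" (Hugging Face token) prefixes are assumptions based on publicly known token formats, and the file-extension filter is arbitrary.

```python
# Minimal sketch: flag files that embed LLM API endpoints or
# key-shaped strings. Heuristic and illustrative only; patterns
# and endpoints are assumptions, not a complete detection rule.
import re
import sys
from pathlib import Path

# Endpoints reportedly abused (Gemini API, Hugging Face inference
# API); extend with others relevant to your environment.
SUSPECT_ENDPOINTS = [
    "generativelanguage.googleapis.com",
    "api-inference.huggingface.co",
]

# Google API keys conventionally start with "AIza"; Hugging Face
# user access tokens start with "hf_". Both regexes are heuristics.
KEY_PATTERNS = [
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    re.compile(r"hf_[0-9A-Za-z]{30,}"),
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for endpoint in SUSPECT_ENDPOINTS:
        if endpoint in text:
            findings.append(f"{path}: references LLM endpoint {endpoint}")
    for pattern in KEY_PATTERNS:
        for match in pattern.finditer(text):
            # Redact most of the candidate key in the report.
            findings.append(f"{path}: possible API key {match.group()[:8]}...")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for file in root.rglob("*"):
        if file.is_file() and file.suffix in {".vbs", ".py", ".ps1", ".js"}:
            for finding in scan_file(file):
                print(finding)
```

In practice this kind of static string matching only catches careless samples; runtime monitoring of outbound calls to LLM endpoints from unexpected processes would cover the dynamic, self-rewriting behavior GTIG describes.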