Google warns criminals are building and selling illicit AI tools - and the market is growing (www.techradar.com)

🤖 AI Summary
Google’s Threat Intelligence Group (GTIG) warns that criminals are no longer just using AI to boost productivity: they are now engineering purpose-built AI tools for active cyber operations. The report highlights “just‑in‑time” (JIT) AI malware such as PROMPTFLUX, VBScript that queries Google’s Gemini API for fresh obfuscation and evasion techniques and rewrites itself at runtime, a capability intended to defeat static signature detection. GTIG also found underground marketplaces selling ready-made AI tooling, threat actors posing as “researchers” to trick model APIs into disclosing prohibited guidance, and early-stage samples suggesting that dynamic obfuscation and targeting techniques are still being refined.

The shift matters because it lowers the skill barrier for complex attacks and weakens traditional defenses such as static signatures and naive content filters. The report also links state‑sponsored actors (Iran, China) to AI-assisted reconnaissance and data exfiltration, underscoring the geopolitical risk. For the AI/ML community the implications are clear: model providers must strengthen API abuse detection, provenance and access controls, and prompt‑response filtering, while defenders should invest in runtime and behavioral detection, model watermarking, and forensic telemetry. As JIT LLM-assisted malware evolves, collaboration among platform operators, security teams, and policy makers will be essential to close this new attack surface.
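To make the prompt‑response filtering mitigation concrete, here is a minimal, purely illustrative sketch of how a provider-side filter might flag the kind of API request GTIG attributes to JIT malware like PROMPTFLUX. The pattern list and function name are hypothetical; real providers use trained classifiers and contextual signals, not keyword regexes, so this only demonstrates the idea of screening prompts before they reach the model.

```python
import re

# Hypothetical abuse patterns (illustration only): requests pairing evasion
# or self-modification intent with scripting/malware terms. A production
# system would use an ML classifier plus account- and usage-level signals.
SUSPICIOUS_PATTERNS = [
    r"\bobfuscat\w*\b.*\b(vbscript|powershell|payload)\b",
    r"\bevad\w*\b.*\b(antivirus|detection|edr)\b",
    r"\bself[- ]modify\w*\b.*\b(script|malware|code)\b",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known abuse pattern."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

# A request resembling the self-modification queries described in the report:
print(flag_prompt("Give me VBScript obfuscation techniques to evade antivirus"))  # True
print(flag_prompt("What is the capital of France?"))  # False
```

Keyword filters like this are exactly the “naive content filters” the report says attackers already bypass (e.g., by posing as researchers), which is why it pairs them with API abuse detection, access controls, and behavioral telemetry rather than relying on any single layer.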