Underground AI models promise to be hackers' 'cyber pentesting waifu' (cyberscoop.com)

🤖 AI Summary
Palo Alto Networks’ Unit 42 report exposes a growing underground market for custom, jailbroken, and open-source large language models tailored to lower-level hacking tasks. Dark-web vendors now sell subscription services or source-code access for models that claim to be trained on malware datasets, exploit write-ups, and phishing templates. The report highlights two examples: WormGPT (and a reemerged WormGPT4), marketed as a commercialized, subscription-driven hacking LLM (lifetime access advertised for as little as ~$220) that can generate exploit code and PowerShell scripts; and KawaiiGPT, a free GitHub project that installs in minutes and wraps attack scaffolding in an informal “waifu” persona. Both are community-maintained and blur the line between dual-use pentesting tools and outright cybercrime platforms.

Technically, these models automate practical tasks such as vulnerability scanning, code generation for lateral movement, data encryption and exfiltration, and phishing content, lowering the skill barrier for attackers. Unit 42 found limitations: generated malware is often detectable and less sophisticated than high-end automated campaigns. The real risk, however, is commoditization: nonexperts can issue simple prompts and receive ready-to-run scripts. For defenders and red teams, this means adapting controls, detection, and threat modeling to an ecosystem where specialized LLMs make basic cyber operations cheap, accessible, and continuously iterated by developer communities.