🤖 AI Summary
New research from NYU Tandon reveals that large language models (LLMs) can autonomously execute complete ransomware attacks, marking a significant evolution in AI-driven cyber threats. The team developed a proof-of-concept system dubbed “Ransomware 3.0” (also known as “PromptLock”) that performs every phase of a ransomware attack, from system reconnaissance and file identification through data theft or encryption to ransom note generation, across diverse platforms including Windows desktops, Linux servers, and embedded devices such as the Raspberry Pi. The work demonstrates that AI can orchestrate complex cyberattacks without human intervention, posing new challenges for cybersecurity defenses.
Technically, the prototype generates its attack scripts on the fly by querying open-source LLMs, tailoring each run to the victim’s environment. Because the code is uniquely generated for every attack instance, traditional malware detection based on signatures or behavior patterns is largely ineffective. In testing, the prototype identified sensitive files with 63–96% accuracy depending on the environment, and AI usage costs were minimal: roughly $0.70 per attack via commercial APIs, with open-source models reducing the cost to near zero. Such affordability and automation could democratize ransomware deployment, enabling less skilled threat actors to launch sophisticated, highly personalized extortion campaigns.
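To see why per-instance code generation defeats hash-based signatures, consider a minimal sketch in Python. The two scripts below are harmless, functionally equivalent stand-ins for LLM output on two different runs (no attack logic is shown, and neither variant comes from the study itself); a scanner keyed on known-bad file hashes sees them as unrelated artifacts.

```python
import hashlib

# Two functionally equivalent scripts, as an LLM might emit them with
# different identifiers and structure on each run. The payload logic
# is a benign stand-in (listing a directory), not attack code.
variant_a = """
import os
def scan(root):
    return [f for f in os.listdir(root)]
print(scan('.'))
"""

variant_b = """
from os import listdir
def enumerate_entries(base_dir):
    results = list(listdir(base_dir))
    return results
print(enumerate_entries(base_dir='.'))
"""

# A signature-based scanner keyed on file hashes treats these as two
# unrelated artifacts, even though their behavior is identical.
for name, src in [("variant_a", variant_a), ("variant_b", variant_b)]:
    print(name, hashlib.sha256(src.encode()).hexdigest())
```

The same argument extends to behavioral signatures: when the orchestration code itself varies per victim, the fixed execution patterns such detectors key on are diluted as well.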
Beyond exposing a new AI-powered threat vector, the research serves as an early warning, urging cybersecurity teams to develop novel detection strategies that monitor access to sensitive files and control the use of AI services. By publishing these findings under strict ethical and controlled conditions, the NYU researchers give the AI/ML and security communities foundational knowledge to preempt the rise of autonomous, AI-driven ransomware.
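One concrete defensive direction the summary points to, monitoring access to sensitive files, can be prototyped cheaply. Below is a minimal sketch, assuming the third-party `watchdog` package (`pip install watchdog`); the watched paths and burst thresholds are illustrative assumptions, not values from the NYU study. It alerts on bursts of file writes under directories that hold sensitive data, the kind of activity bulk encryption or exfiltration staging tends to produce; pairing it with egress controls on known LLM API endpoints would address the AI-service-usage side.

```python
import time
from collections import deque

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SENSITIVE_DIRS = ["/home", "/srv/data"]  # assumed locations; tune per deployment
BURST_WINDOW_S = 10                      # sliding window for the rate check
BURST_THRESHOLD = 50                     # writes per window that trigger an alert

class BurstDetector(FileSystemEventHandler):
    """Flags bursts of file modifications, a crude proxy for bulk encryption."""

    def __init__(self):
        self.events = deque()

    def on_modified(self, event):
        if event.is_directory:
            return
        now = time.time()
        self.events.append(now)
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] > BURST_WINDOW_S:
            self.events.popleft()
        if len(self.events) > BURST_THRESHOLD:
            print(f"ALERT: {len(self.events)} writes in {BURST_WINDOW_S}s "
                  f"(most recent: {event.src_path})")

if __name__ == "__main__":
    observer = Observer()
    handler = BurstDetector()
    for path in SENSITIVE_DIRS:
        observer.schedule(handler, path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

A production version would feed a SIEM rather than print, and would also watch for mass renames and reads, but even this skeleton captures the core idea: detect the ransomware’s effects on sensitive data rather than its ever-changing code.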