Security Community Slams MIT-Linked Report Claiming AI Powers 80% of Ransomware (socket.dev)

🤖 AI Summary
Researchers at MIT Sloan and the vendor Safe Security published a paper claiming that "80.83%" of ransomware incidents are "AI-powered," drawing sharp pushback from the security community. Critics, notably researcher Kevin Beaumont and threat analysts at Sophos, Mandiant, and elsewhere, say the paper provides no dataset or definition for "AI-enabled," cites CISA advisories that never mention AI as supporting evidence, and even labels defunct families like Emotet as AI-driven. The report was produced through MIT's corporate CAMS program and co-authored by Safe Security employees, raising concerns that vendor interests and institutional branding lent speculative, marketing-aligned claims undue credibility without peer review or a transparent methodology.

The dispute matters because it shapes risk priorities and procurement. Independent data (ENISA 2025, Verizon DBIR, Mandiant, Sophos) shows adversaries are experimenting with AI, for example AI-generated phishing, voice cloning, and scraping to identify targets, but real-world ransomware campaigns are still driven mainly by credential theft, stolen access, infostealers, and exploitation of weak authentication (lack of MFA). While researchers have demonstrated AI-assisted malware in labs, there is no evidence of AI orchestrating ransomware at anything like the scale the paper asserts.

The technical takeaway for defenders: focus on proven mitigations (MFA, credential hygiene, patching, detection of initial-access brokers) and demand transparent data, clear definitions, and independent review before accepting alarmist AI narratives.