Hackers tricked ChatGPT, Grok and Google into helping them install malware (www.engadget.com)

🤖 AI Summary
Hackers have found a way to exploit AI systems like ChatGPT and Grok to facilitate malware installation through deceptive Google search results. A recent report by detection firm Huntress details how threat actors engage in conversations with AI assistants about common queries, prompting them to generate harmful commands. By making these conversations publicly visible and boosting them on Google, attackers can ensure these malicious instructions rank high in search results, leading unsuspecting users to execute them unknowingly. This development is significant as it illustrates a new tactic that bypasses traditional security warnings, making it easier for cybercriminals to exploit trust in well-known AI tools and platforms. The attack showcased by Huntress involved the execution of a command that allowed hackers to install the AMOS malware on a victim's device after a seemingly innocuous search for "clear disk space on Mac." As users increasingly rely on AI for advice, this incident underscores the pressing need for enhanced cybersecurity measures and a more critical approach to accepting information from AI sources.
Loading comments...
loading comments...