🤖 AI Summary
Security researcher Viktor Markopoulos (FireTail) revealed an “ASCII smuggling” technique that injects invisible payloads using special characters from the Unicode Tags block to hide instructions from users while still being parsed by LLMs. He demonstrated the attack against Google Gemini’s integrations — Calendar invites and email — showing attackers can hide commands in invite titles, overwrite organizer details (identity spoofing), smuggle meeting descriptions/links, or plant hidden instructions in emails that an LLM with inbox access could follow to search for and exfiltrate sensitive data. Similar payloads embedded in web product descriptions can also cause agentic models to surface malicious URLs or false information to users.
The significance is heightened now that models like Gemini act agentically and have broad access to user data and workflows: invisible prompts can convert simple phishing into autonomous data-extraction or misinformation engines, and can silently alter model behavior or poison downstream data. FireTail found Claude, ChatGPT and Microsoft Copilot applied input sanitization and resisted the attack, but Google declined to treat the issue as a security bug after Markopoulos’ Sept. 18 disclosure. The case underscores the need for robust input normalization, Unicode-aware sanitization, and platform defenses for LLMs that operate across email, calendars and web content to prevent stealthy instruction injection and data-leakage risks.
Loading comments...
login to comment
loading comments...
no comments yet