HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage (www.tenable.com)

🤖 AI Summary
Tenable Research disclosed seven novel vulnerabilities in OpenAI’s ChatGPT (reporting on the latest GPT-5 deployment) that enable indirect prompt injections, evasion of safety controls, persistence, and the exfiltration of private user data — including information stored in ChatGPT “memories” and conversation history. The researchers mapped how ChatGPT’s System Prompt and bio tool (memories) are appended as static context, and how the web tool splits browsing between ChatGPT and a weaker “SearchGPT” agent. By injecting malicious instructions into comment sections, specially indexed pages, or crafted chat URLs, attackers can trigger SearchGPT to return or render instructions that ultimately influence ChatGPT’s responses, sometimes without any user interaction (0‑click) or with a single click. Key technical abuses include a 0‑click attack triggered by indexing pages that respond only to SearchGPT’s fingerprinted user agent; a simple q= query injection via chatgpt.com/?q={prompt}; and a url_safe bypass that exploits Bing’s whitelisted redirect/tracking links to leak data one character at a time. Combining these techniques with “conversation injection” lets attackers pivot from SearchGPT’s limited context into ChatGPT’s memory-aware responses, enabling scalable, targeted exfiltration. The findings highlight a systemic risk in tool-augmented LLM architectures and search integration: indexing and SEO are not security boundaries, and current URL/sandboxing and memory-access controls are insufficient — urging immediate design changes around browsing isolation, URL handling, and memory-scoped access to protect hundreds of millions of users.
Loading comments...
loading comments...