🤖 AI Summary
AI-powered web browsers such as OpenAI’s ChatGPT Atlas and Perplexity’s Comet are pushing “agentic” web browsing—AI assistants that click, fill forms, and act on users’ behalf—but cybersecurity researchers warn these agents introduce significant new privacy and security risks. To be useful they often request deep access to email, calendars and other personal accounts, yet in practice they’re mainly helpful for simple tasks and can struggle with complex workflows. As adoption grows, the potential for large-scale abuse grows too: prompt injection attacks—where malicious content on a webpage tricks an agent into executing adversarial instructions—can expose sensitive data or cause unwanted actions like purchases or posts.
Technically, prompt injection is a systemic, hard-to-solve problem because LLMs have weak separation between core instructions and consumed data; attackers have progressed from hidden text to sophisticated encodings (including images with embedded instructions). Brave’s research calls indirect prompt injection an industry-wide challenge, and both OpenAI and Perplexity acknowledge the frontier nature of the threat. Firms have deployed mitigations—OpenAI’s “logged out mode” and Perplexity’s real-time detection—but experts call it a cat-and-mouse game. Practical advice for users: limit agent access, silo agents away from banking/health accounts, use unique passwords and MFA, and delay giving broad control until defenses mature. The issue has major implications for browser security models and how we architect agent trust boundaries going forward.
Loading comments...
login to comment
loading comments...
no comments yet