🤖 AI Summary
            TechCrunch reporter Max Zeff highlights a growing security crisis: web browsers with built-in AI agents (Perplexity’s Comet, Microsoft’s Copilot Edge, OpenAI’s ChatGPT Atlas, etc.) dramatically expand attackers’ opportunities to steal data or act on users’ behalf. Unlike traditional browser exploits that require bugs in the browser code, AI browsers can be manipulated via “prompt injection” attacks — hidden or encoded instructions in page source or images that the agent reads and follows. Those injections can exfiltrate emails, passwords, credit cards, browsing history, or even perform actions like sending messages, making purchases, or filling forms. Researchers have already found prompt-injection vulnerabilities in new AI browsers (e.g., Comet and Atlas), and industry leaders acknowledge the problem: OpenAI’s CISO called prompt injection “an unsolved security problem,” and Brave labelled it a systemic challenge.
The technical implication is stark: adding autonomous AI capabilities increases the attack surface from exploitable binary bugs to craftable text or media content on any webpage or email. That makes mitigation harder, because instructions or data that fool models don’t need the high-skill vulnerabilities previously required to compromise browsers. For now the tradeoffs look unfavorable — convenience features that need deep access to accounts and local data come with real, presently exploitable risks. The consensus: these AI-enabled browsers aren’t ready for prime time and should be used cautiously, if at all, until robust defenses against prompt injection and scoped permissions are in place.
        
            Loading comments...
        
        
        
        
        
            login to comment
        
        
        
        
        
        
        
        loading comments...
        no comments yet