Signal leaders warn agentic AI is an insecure, unreliable surveillance risk (coywolf.com)

🤖 AI Summary
Signal President Meredith Whittaker and VP Udbhav Tiwari raised serious concerns about the security and reliability of agentic AI during their presentation at the 39th Chaos Communication Congress in Hamburg, calling it an "insecure, unreliable surveillance risk." Current AI agents need broad access to sensitive personal data to carry out autonomous tasks, they argued, exposing users to significant threats from malware and hidden prompt-injection attacks. They singled out Microsoft's Recall, which tracks user activity and compiles it into a comprehensive database, as insufficiently hardened against these vulnerabilities; Signal has responded with stopgap measures such as setting app flags that block screen recording. On reliability, Whittaker noted that as task complexity increases, the probability of success drops sharply, with agents failing roughly 70% of the time even in best-case scenarios. The executives called for immediate industry changes: stop the reckless deployment of AI agents, make opt-out the default for users, and provide radical transparency about how AI systems operate. Their warnings mark a critical juncture for the AI/ML community, urging a reevaluation of current practices before consumer trust erodes and agentic AI triggers a broader crisis.
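Whittaker's point about complexity and failure rates follows from simple probability: if an agent must chain several steps and each step succeeds independently, end-to-end reliability is the product of the per-step rates, which collapses quickly as tasks get longer. The sketch below is an illustration of that compounding effect, not a model from the talk; the 95% per-step figure is an assumed example value.

```python
# Illustrative only: end-to-end success of a multi-step agent task,
# assuming each step succeeds independently with probability p.
# The per-step rate (0.95) is a hypothetical example, not a cited figure.

def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed."""
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 95% per step -> {chain_success(0.95, n):.0%} end-to-end")
```

Even a seemingly reliable 95% per-step agent succeeds end-to-end only about 60% of the time over ten chained steps, which is consistent with the talk's broader claim that success probability degrades dramatically with task complexity.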