AI chatbots are giving out people's real phone numbers (www.technologyreview.com)

🤖 AI Summary
Recent reports reveal that Google's AI chatbots, particularly Gemini, are unintentionally exposing users' personal phone numbers, leading to a surge in unsolicited calls and privacy violations. Individuals have received calls from strangers seeking various services after the AI erroneously handed out their numbers. This trend underscores the risks of generative AI's reliance on vast training datasets, which often contain personally identifiable information (PII). AI and privacy experts caution that these incidents may be more widespread than reported, pointing to a 400% increase in privacy-related queries at companies like DeleteMe in recent months.

The implications for the AI/ML community are significant: the incidents highlight the inadequacies of current privacy safeguards in large language models (LLMs). Although companies implement guardrails to block the release of PII, these measures can fail, demonstrating that LLMs may memorize and reproduce sensitive data from their training sets. As demand for high-quality training data grows, the potential for exposing personal information is likely to rise. The situation calls for urgent improvements to privacy protocols and consumer controls, and for addressing the ethical concerns around data usage in AI training, particularly given that existing legal frameworks may not adequately protect individuals' rights in these digital environments.