Meta won’t allow users to opt out of targeted ads based on AI chats (arstechnica.com)

🤖 AI Summary
Meta announced it will begin using users' interactions with Meta AI to personalize content and ad recommendations across Facebook, Instagram, and WhatsApp, with users notified on October 7 and the change taking effect December 16. While Meta says sensitive topics (religion, sexual orientation, political views, health, race/ethnicity, philosophical beliefs, trade union membership) won't be used for ad targeting, company spokespeople confirmed there will be no opt-out specifically for targeted ads derived from AI chats. The initial in-app notification language ("Learn how Meta will use your info in new ways to personalize your experience") doesn't explicitly mention AI until users click through, a point critics flagged and Meta disputed.

For the AI/ML community this matters both technically and ethically. At scale (Meta cites more than 1 billion monthly Meta AI users), integrating conversational interactions into recommendation and ad systems introduces new data modalities that can change model inputs, feedback loops, and personalization dynamics, potentially amplifying biases or producing stronger, less transparent targeting. Meta frames the remaining controls as behavioral (changing how you interact with the AI), unlinking accounts, or adjusting ad settings, but the lack of an explicit opt-out raises questions about informed consent, data governance, and regulatory risk. Practitioners should watch how training/serving pipelines, privacy filters, and auditing mechanisms are adapted to segregate sensitive categories while continuing to monetize conversational signals.
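To make the "segregate sensitive categories" point concrete, here is a minimal, hypothetical sketch of the kind of filter such a pipeline might apply: conversation-derived topic signals are dropped if they fall into the sensitive categories Meta lists, before anything reaches an ad-personalization feature store. The function names, data shapes, and category labels are assumptions for illustration only, not Meta's actual system.

```python
# Hypothetical illustration: strip sensitive topic categories (as listed in
# Meta's announcement) from conversation-derived interest signals before they
# are used for ad personalization. Not Meta's actual pipeline.

SENSITIVE_CATEGORIES = {
    "religion",
    "sexual_orientation",
    "political_views",
    "health",
    "race_ethnicity",
    "philosophical_beliefs",
    "trade_union_membership",
}

def filter_ad_signals(conversation_topics: dict[str, float]) -> dict[str, float]:
    """Drop any topic signal tagged as a sensitive category.

    conversation_topics maps a topic label (e.g. "travel", "health")
    to a model-derived interest score.
    """
    return {
        topic: score
        for topic, score in conversation_topics.items()
        if topic not in SENSITIVE_CATEGORIES
    }

if __name__ == "__main__":
    raw_signals = {"travel": 0.82, "health": 0.64, "cooking": 0.47}
    print(filter_ad_signals(raw_signals))  # {'travel': 0.82, 'cooking': 0.47}
```

Even in this toy form, the hard part the summary alludes to is visible: the filter only works if upstream classification of free-form chat into topic labels is reliable, which is exactly where auditing mechanisms would need to focus.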