🤖 AI Summary
The EU’s “Chat Control” proposal — officially the Child Sexual Abuse Regulation — has been quietly revived in a repackaged form and was greenlit by Coreper, moving it closer to Council adoption (possibly as soon as December). The new draft removes an explicit obligation for on-device scanning but endorses “voluntary” mass scanning by platforms, introduces mandatory age‑verification systems, and includes a vaguely worded Article 4 requiring “all appropriate risk mitigation measures” that could be used to pressure encrypted services to implement client‑side scanning. Critics warn this procedural sleight-of-hand shifts debate from public fora to opaque institutional channels while preserving the core logic of normalised, large‑scale monitoring of private communications.
Technically and legally, the proposal is troubling: it encourages automated AI-driven grooming detection that current models cannot reliably distinguish from benign conversation — real-world data show high false‑positive rates (roughly half of German police reports and ~80% of Swiss machine reports are non-criminal). Client‑side scanning or backdoors would undermine end‑to‑end encryption, widen attack surfaces, and create permanent surveillance infrastructure that can be repurposed (function creep). Mandatory age checks would require intrusive biometric/behavioural data or IDs, threatening anonymity for journalists, whistleblowers and vulnerable users. Experts argue the scheme would do little to remove decentralised abuse material while violating ECJ precedent on automated, generalised surveillance and exporting privacy harms beyond the EU.
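The high false-positive rates cited above follow from base-rate arithmetic: when the targeted behaviour is rare relative to total message volume, even an accurate classifier flags mostly benign content. A minimal sketch of that calculation, using entirely hypothetical numbers (none of these figures come from the article):

```python
# Illustrative base-rate arithmetic: a scanner with a good detection rate
# still produces mostly false alarms when the behaviour it looks for is
# rare among the messages it scans.

def flagged_report_breakdown(n_messages, prevalence, tpr, fpr):
    """Return (true_flags, false_flags, precision) for a hypothetical scanner.

    prevalence -- fraction of messages that are actually abusive
    tpr        -- true-positive rate (share of abusive messages flagged)
    fpr        -- false-positive rate (share of benign messages flagged)
    """
    positives = n_messages * prevalence
    negatives = n_messages - positives
    tp = positives * tpr          # abusive messages correctly flagged
    fp = negatives * fpr          # benign messages wrongly flagged
    precision = tp / (tp + fp)    # share of flags that are real hits
    return tp, fp, precision

# Hypothetical scale: 1 billion messages/day, 1 in 100,000 abusive,
# 90% detection rate, 0.1% false-positive rate.
tp, fp, precision = flagged_report_breakdown(1_000_000_000, 1e-5, 0.9, 0.001)
print(f"true flags: {tp:,.0f}, false flags: {fp:,.0f}, precision: {precision:.1%}")
```

Under these assumed numbers, benign messages dominate the flagged set by roughly a hundred to one, which is the same dynamic the German and Swiss figures point at.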