🤖 AI Summary
EU privacy and security experts have issued a stark rebuke of the Council’s revised Child Sexual Abuse Regulation (CSAR), warning that recent clarifications, including the removal of an explicit requirement for mandatory scanning, still leave “high risks to society.” Eighteen leading academics from institutions such as ETH Zurich, KU Leuven and the Max Planck Institute say the Council’s November text implicitly permits so‑called “voluntary” automated analysis of private chats and mandates age verification for apps and messaging services. An endorsement by EU ambassadors on Nov. 19 would lock in the Council’s position ahead of likely formal adoption in December and tougher negotiations with the European Parliament next year.
Technically, the experts argue the proposal creates two grave problems. First, expanding automated AI analysis to flag ambiguous “grooming” behaviours will produce many false positives, because current ML models lack the precision and contextual understanding required; at scale this risks investigator overload and diversion from real cases. Second, mandated age verification inherently relies on biometric, behavioural or contextual data and cannot today be implemented in a genuinely privacy‑preserving way, creating discrimination risks, data‑exploitation incentives and the exclusion of people without digital IDs. They also warn such controls are trivial to evade (via VPNs or non‑EU providers), potentially driving children to less secure services. The letter calls for scrapping these elements or tightly limiting scanning to criminal suspects only.
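To see why the false‑positive concern dominates at scale, here is a minimal sketch of the base‑rate arithmetic (Bayes’ rule). The prevalence, sensitivity and false‑positive‑rate figures are illustrative assumptions, not numbers from the experts’ letter:

```python
# Illustrative base-rate calculation: what fraction of flagged chats
# are actually real cases? All input numbers are assumptions chosen
# for illustration, not figures from the letter or the regulation.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              false_positive_rate: float) -> float:
    """Bayes' rule: P(real case | classifier flags the chat)."""
    true_pos = prevalence * sensitivity                    # real cases flagged
    false_pos = (1.0 - prevalence) * false_positive_rate   # benign chats flagged
    return true_pos / (true_pos + false_pos)

# Assume 1 in 10,000 conversations involves grooming (prevalence = 1e-4),
# a classifier that catches 90% of real cases (sensitivity = 0.9),
# and that wrongly flags 1% of benign chats (false_positive_rate = 0.01).
ppv = positive_predictive_value(prevalence=1e-4,
                                sensitivity=0.9,
                                false_positive_rate=0.01)
print(f"Share of flags that are real cases: {ppv:.2%}")  # -> ~0.89%
```

On these assumed numbers, under 1% of flags would correspond to real cases, i.e. over 99% would be false positives, which is the investigator‑overload scenario the experts describe.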