🤖 AI Summary
The UK has introduced world-leading legislation to stop AI-generated child sexual abuse material (CSAM) at the source by allowing designated bodies, including AI developers and child-protection groups such as the Internet Watch Foundation (IWF), to safely test and scrutinise models. An amendment to the Crime and Policing Bill gives the Technology Secretary and Home Secretary powers to designate authorised testers and establishes an expert group to design secure testing protocols, protect sensitive data, and support researcher wellbeing. The move responds to IWF data showing that reports of AI-generated CSAM more than doubled year-on-year (from 199 to 426), that reports involving images of 0–2-year-olds surged from 5 to 92, and that the most severe Category A content rose to 56% of illegal material; girls were depicted in 94% of illegal AI images in 2025.
Technically, the law removes a major barrier to proactive safety testing (criminal liability for possessing illegal images), enabling red-teaming, model audits, and evaluation of filters before release. It signals mandatory “safety-by-design” expectations: content filtering, provenance and watermarking, access controls, secure testing environments, and robust auditing to prevent models from being misused to produce non-consensual or extreme content. As one of the first national frameworks of its kind, the policy could set a global precedent, shaping how developers build, test, and certify generative models to mitigate abuse while preserving secure research practices.