🤖 AI Summary
A police-commissioned survey of 1,700 people found that one in four respondents either see nothing wrong with, or feel neutral about, creating and sharing sexual deepfakes of non-consenting people (13% "nothing wrong," 12% neutral). The Crest Advisory report also found that 7% had been depicted in an intimate deepfake (only 51% of those reported it), one in 20 admitted making a deepfake, more than 10% said they would make one in future, and two-thirds had seen, or thought they might have seen, a deepfake. Senior police and victim-support figures warned that cheaper, more accessible AI tools are normalizing sexualized deepfakes, which predominantly target women, and that tech firms are complicit. Creating non-consensual sexually explicit deepfakes is now a criminal offence under the Data Act.
For the AI/ML community this underlines urgent technical and policy priorities: reliable detection and provenance tools (watermarking, robust classifiers), stronger platform moderation workflows, privacy-preserving design, and clearer auditability and reporting paths so victims aren't silenced by shame or disbelief. The survey also flagged a demographic pattern, with younger men more likely to accept or produce deepfakes, suggesting targeted education and behavioral interventions. In short, the story is a call to combine technical safeguards, legal enforcement, platform responsibility, and public education to prevent misuse as generative models become faster and cheaper.
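To make the provenance point concrete, here is a minimal sketch of the underlying idea: record a cryptographic hash and basic creation metadata for a media file, then verify it later. This is an illustration only, not any specific standard (real provenance schemes such as C2PA sign richer, tamper-evident manifests); the file names, `creator` field, and manifest layout below are hypothetical.

```python
# Minimal provenance sketch: hash a media file at creation time, store a
# small JSON manifest next to it, and later check whether the file still
# matches the recorded hash. Standard library only; all paths and manifest
# fields are illustrative assumptions, not a real provenance standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(media_path: Path, creator: str, manifest_path: Path) -> None:
    """Record the file's hash, creator, and UTC timestamp in a JSON manifest."""
    manifest = {
        "file": media_path.name,
        "sha256": sha256_of_file(media_path),
        "creator": creator,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(media_path: Path, manifest_path: Path) -> bool:
    """Return True if the file's current hash matches the recorded one."""
    manifest = json.loads(manifest_path.read_text())
    return sha256_of_file(media_path) == manifest["sha256"]


if __name__ == "__main__":
    media = Path("example.png")                    # hypothetical media file
    manifest = Path("example.provenance.json")     # hypothetical manifest path
    if media.exists():
        write_manifest(media, creator="example-studio", manifest_path=manifest)
        print("verified:", verify_manifest(media, manifest))
```

A hash-only manifest can show a file has not been altered since the record was made, but not who made it or how; that is why the priorities above pair provenance with watermarking, signing, and detection classifiers rather than relying on any single mechanism.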