A.I. Is Making Death Threats More Realistic (www.nytimes.com)

🤖 AI Summary
Activists at Australia’s Collective Shout, including Caitlin Roper, were targeted this year with a wave of gruesome online threats: images and videos depicting them hanged, burned, dismembered, or otherwise mutilated. What made the attacks especially traumatic was their realism and personalization. Generative AI was used to synthesize photorealistic faces, depict victims in their own familiar clothing, and produce convincing audio and video elements. Because these synthetic assets can be produced cheaply and at scale, harassers can tailor threats to individuals with chilling specificity, turning abstract menace into immediate psychological and reputational harm.

For the AI/ML community, this raises urgent technical and policy challenges. Text-to-image, face-swapping, and voice-cloning models enable multimodal deepfakes that amplify fear and evade simple moderation; they are often trained on publicly scraped data and are increasingly accessible. Mitigations include robust deepfake detection, provenance and watermarking of model outputs, stricter model-release and API controls, and platform-level takedown and identity-verification workflows, but each has limits and can be bypassed. The episode underscores a predictable dual-use problem: generative tools yield creative value yet also create novel, scalable threats, demanding a mix of technical defenses, stronger governance, and legal protections for targeted individuals.
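To make the provenance idea concrete, here is a minimal toy sketch, not any platform's actual scheme: a generation pipeline signs an HMAC over the SHA-256 hash of each output file, and a verifier later checks that the file still matches its manifest. Real provenance standards such as C2PA use certificate-based signatures and embedded manifests; the key, file names, and helper functions below are illustrative assumptions.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Illustrative shared secret; real provenance schemes (e.g. C2PA)
# use asymmetric certificates rather than a shared HMAC key.
SIGNING_KEY = b"demo-key-not-for-production"

def sign_output(path: Path) -> dict:
    """Produce a provenance manifest for a generated media file."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    tag = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"file": path.name, "sha256": digest, "signature": tag}

def verify_output(path: Path, manifest: dict) -> bool:
    """Check that the file's current bytes still match its signed manifest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))

if __name__ == "__main__":
    out = Path("generated.png")
    out.write_bytes(b"\x89PNG...placeholder")  # stand-in for model output
    manifest = sign_output(out)
    print(json.dumps(manifest, indent=2))
    print("verified:", verify_output(out, manifest))
```

The limit the summary itself flags applies here: provenance only helps when honest tools attach it. A harasser running an unwatermarked open model simply produces files with no manifest, so detection and platform-level workflows still have to carry much of the weight.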