You can’t libel the dead. But that doesn’t mean you should deepfake them. (techcrunch.com)

🤖 AI Summary
Zelda Williams publicly begged fans to stop sending her AI videos of her late father after OpenAI released Sora 2 and the Sora social app, which let users generate highly realistic video deepfakes of themselves, of friends (with permission via a "cameo" setting), and, disturbingly, of many deceased public figures. Sora's invite-only rollout has already produced clips of historical leaders and dead celebrities, and OpenAI's protections appear inconsistent: the model blocks some deceased individuals but allows others, and the dead have no mechanism to set guardrails on their appearance or behavior. Legally this sits in a gray zone, since U.S. precedent generally doesn't recognize libel claims on behalf of the deceased, so tech-policy constraints, not defamation law, are doing the heavy lifting.

For AI/ML practitioners and policymakers the story is a warning: Sora 2 demonstrates how rapid improvements in generative-video fidelity, combined with loose or inconsistent safety filters, can erode consent, legacy, and IP norms at scale. Mitigating misuse will require technical controls (identity verification, databases of deceased persons, robust content filters), clearer opt-in/opt-out mechanisms, and stronger provenance and watermarking. Without standardized guardrails across platforms, especially as competitors with fewer safeguards emerge, realistic deepfakes of both living and dead people will pose growing ethical, legal, and societal risks.
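The consent and deny-list controls described above could, in principle, be enforced as a pre-generation check. A minimal sketch, assuming a simple policy object; the `IdentityPolicy` structure and all names here are illustrative assumptions, not OpenAI's actual safeguards:

```python
# Hypothetical pre-generation guardrail: check the identities referenced in a
# prompt against consent records and a deceased-persons list before rendering.
from dataclasses import dataclass, field

@dataclass
class IdentityPolicy:
    # People who granted "cameo"-style consent to be depicted (assumption).
    consented: set = field(default_factory=set)
    # Deceased public figures, who cannot consent at all (assumption).
    deceased: set = field(default_factory=set)

    def check(self, referenced_people):
        """Return (allowed, reasons): deny if any referenced person is
        deceased or has no recorded opt-in."""
        reasons = []
        for person in referenced_people:
            if person in self.deceased:
                reasons.append(f"{person}: deceased, cannot consent")
            elif person not in self.consented:
                reasons.append(f"{person}: no recorded consent")
        return (len(reasons) == 0, reasons)

policy = IdentityPolicy(
    consented={"alice_example"},
    deceased={"famous_deceased_actor"},
)

allowed, reasons = policy.check(["alice_example"])
print(allowed)  # True: consent on record, so generation may proceed

allowed, reasons = policy.check(["famous_deceased_actor"])
print(allowed, reasons)  # False: blocked, with an auditable reason
```

The key design point is that the default is deny: absence from the consent list blocks generation, rather than absence from a block list allowing it, which is the inconsistency the article criticizes.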