Sora 2 Can Generate Videos of Celebs Appearing to Shout Racist Slurs (www.rollingstone.com)

🤖 AI Summary
OpenAI’s new Sora 2 video generator, launched invite-only and praised for more realistic imagery and physics, is already being used to create harmful deepfakes. Researchers at Copyleaks documented a viral “Kingposting” trend in which celebrities’ Cameo-enabled likenesses (e.g., Sam Altman, Mark Cuban, Jake Paul, xQc) are depicted as airplane passengers shouting racist slurs. Users evade Sora’s safeguards by prompting with coded or phonetically similar terms to produce audio that mimics slurs; the resulting videos (some carrying Sora’s watermark) are downloadable and spread rapidly across platforms like TikTok. OpenAI has taken partial steps, including IP opt-ins for remixing and blocks on certain disrespectful depictions, but moderation gaps persist and creators are left to delete abusive content manually.

For the AI/ML community this highlights a familiar but escalating problem: model safety is an adversarial arms race. Prompt-based evasion circumvents text and audio filters, watermarking and opt-in consent don’t stop redistribution, and hyperrealistic synthetic video is outpacing both human and automated detection. Key technical implications include the need for robust adversarial testing of multimodal filters, stronger provenance (cryptographic watermarking and traceable metadata; see the sketch below), improved audio–speaker disentanglement, and platform-level policies to limit downstream harm. The incident underscores that scale and realism in generative models demand parallel advances in safety tooling, governance, and rapid-response mechanisms.
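To make the provenance point concrete, here is a minimal sketch of signed provenance metadata: a manifest that cryptographically binds claims about a video (which model produced it, under what policy) to the exact bytes of the file, so any edit or re-encode invalidates the check. This is an illustration only, not how Sora works; production standards such as C2PA use asymmetric signatures and embedded manifests, and the field names, key handling, and HMAC scheme here are assumptions made for the sketch.

```python
import hashlib
import hmac
import json
import os

# Hypothetical signing key; a real provenance system would use an
# asymmetric private key held by the generator, not a shared secret.
SIGNING_KEY = os.urandom(32)

def make_manifest(video_bytes: bytes, model: str, policy: str) -> dict:
    """Bind provenance claims to the SHA-256 of the video bytes, then sign them."""
    manifest = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "model": model,
        "policy": policy,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the bytes still match the signed hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return False  # manifest tampered with or signed by a different key
    return claims["content_sha256"] == hashlib.sha256(video_bytes).hexdigest()

video = b"\x00\x01fake-mp4-bytes"
m = make_manifest(video, model="video-gen-v2", policy="likeness-opt-in")
print(verify_manifest(video, m))         # True: bytes match the signed manifest
print(verify_manifest(video + b"x", m))  # False: edited or re-encoded copy
```

The limitation this sketch makes visible is the one the summary raises: the signature binds only to the exact bytes, so a screen capture or re-encode sheds the manifest entirely, which is why provenance metadata has to be paired with robust watermarking and platform-side enforcement rather than relied on alone.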