OpenAI wasn't expecting Sora's copyright drama (www.theverge.com)

🤖 AI Summary
OpenAI fast-tracked Sora, a TikTok-like app that generates 10-second AI videos (with audio) from text prompts and user “cameos,” launching under an opt-out approach that assumed rights-holders wouldn’t mind their characters being recreated. After a wave of offensive and copyrighted deepfakes (from Nazi SpongeBob to problematic fan renditions), CEO Sam Altman reversed course: OpenAI will let rights-holders and cameo owners exert more control, added user-configurable restrictions (e.g., “don’t put me in political commentary”), and pledged clearer watermarks. The company also acknowledged that watermarks are already being removed and that text prompts can circumvent some facial-consent rules, prompting concern about misinformation and harassment. Meanwhile, OpenAI previewed Sora 2 in its API — giving developers access to the same ultra-realistic video model, apparently with fewer built-in safeguards.

For AI/ML practitioners, the episode highlights concrete tensions between generative capability, safety, and distribution. Technically: short-form video synthesis with audio is now high-quality enough to produce recognizably copyrighted characters and near-photorealistic likenesses; watermarks and prompt/content controls are the primary but brittle defenses; and API access to the model multiplies downstream risk.

Operationally, Sora’s runaway adoption underscores massive compute demand — a rationale behind OpenAI’s Stargate infrastructure push, talks with AMD (possible 10% stake), and interest in building chips and a “full stack” — signaling that scale and guardrails will be core battlegrounds as video generation matures.