🤖 AI Summary
OpenAI’s newly released Sora 2, a consumer AI video generator and social app that can synthesize people’s likenesses and voices from short clips, has drawn a public rebuke from Hollywood talent agencies including CAA, WME and UTA. The agencies condemn Sora 2’s “Cameo” capability as a threat to performers’ likeness rights, compensation and consent, and are demanding tighter controls or the removal of their clients’ images. OpenAI has acknowledged the concerns in a blog post: it says it is tweaking Sora 2’s parameters, plans to honor removal requests from estates, requires celebrities to opt in by uploading a Cameo before others can use their likeness, and has hinted at future guardrails and revenue-sharing partnerships.
The clash highlights a core tension between fast-iterating AI product development and Hollywood’s preemptive licensing model. Technically, Sora 2’s ability to emulate a person’s voice and appearance from minimal data makes viral, infringing deepfakes an immediate risk; current mitigations rely heavily on opt-in permissions rather than proactive detection, opt-out defaults, or industry licensing frameworks. The dispute carries key implications for the AI/ML community: designers will likely need stronger consent systems, provenance and detection tooling, default restrictions for public figures, and new commercial licensing or revenue-sharing mechanisms to avoid litigation and protect creators’ rights.