🤖 AI Summary
OpenAI launched a Sora app that generates short-form videos using its Sora 2 video model, and within days feeds were flooded with clips that clearly replicate copyrighted characters and scenes from Nintendo, Sega, Microsoft and other rights holders — apparently because Sora 2 was trained on “millions of hours” of online human-made footage, much of it unlicensed. OpenAI’s current approach is opt-out: IP holders must contact the company to have their content removed from the training set, a policy critics call inadequate and legally risky. Social posts already show faithful reproductions of Sonic, Mario-like characters, and anime and movie content, reviving earlier takedown battles over AI-generated images featuring Nintendo properties.
For the AI/ML community this raises sharp technical and policy questions: advanced video models increasingly reproduce training artifacts (memorization and replication), exposing the limits of provenance tracking and content filtering at scale. The situation spotlights the need for clearer dataset stewardship, opt-in licensing, stronger content moderation, and technical mitigations against exact IP cloning. Given Nintendo’s litigious history and growing industry use of AI (reports say over half of Japanese studios use AI tools), expect legal challenges and intensified debate over model training practices, rights clearance, and where liability should sit when generative systems mirror copyrighted works.