🤖 AI Summary
OpenAI is reportedly building an AI “Sora for music” that generates songs from text or short audio prompts, potentially producing instrumental accompaniments, mood- or tempo-specific background tracks, or stems to pair with vocal recordings. The project — said to involve Juilliard students annotating musical scores (though the school denies formal involvement) — signals a shift from OpenAI’s earlier music experiments (MuseNet’s MIDI outputs and Jukebox’s rudimentary vocal tracks) toward a more sophisticated model akin to Sora 2 for video. The use of trained score annotations highlights a technical move away from purely unstructured dataset scraping toward structured, expert-labeled representations that teach harmony, rhythm, instrumentation and timing — elements that are hard for generic LLM-style training to capture.
If launched, the tool would put OpenAI in direct competition with Suno, Udio, Google’s Music Sandbox and others, while raising the legal and ethical stakes: major labels (Universal, Warner) have already sued rival platforms for alleged copyright infringement, and OpenAI faces its own disputes over training data. Key implications include accelerated growth of AI-generated music, thorny questions about dataset provenance and ownership, and shifts in how music is created and monetized. Music becomes more “programmable,” but industry, regulators and artists must renegotiate rights and value in a landscape where attribution and training sources remain contested.