🤖 AI Summary
A surprise hit earlier this year—an apparently new band called the Velvet Sundown, whose single logged over 3 million streams—turned out to be a “synthetic music project” composed and voiced with AI. That revelation rekindled a debate about whether rock has become so formulaic that bland, algorithmically generated pastiche can stand in for the real thing. The easy answer is no: while AI can mimic textures and familiar tropes well enough to generate convincing, streamable tracks, it often lacks the deliberate weirdness, risky decisions, and narrative voice that make human music feel alive.
Enter Geese, a Brooklyn quartet whose album Getting Killed mixes punk, free jazz, and Radiohead-like sonics into jagged, unpredictable songs—odd time signatures, abrupt tempo shifts, panned wah-wah guitars, trombone squawks, and vocal contortions that read like purposeful creative misbehavior. Their work highlights what current AI music systems struggle to emulate: collaborative, idiosyncratic choices that privilege surprise and emotional ambiguity over polished conformity. For the AI/ML community, this story frames two takeaways: generative models can replicate style and monetize it, but evaluating and generating musical novelty, purposeful risk, and cohesive character—qualities central to artistic progress—remains a hard, human-centered challenge.