The biggest use case for AI videos: dumb pranks (www.businessinsider.com)

🤖 AI Summary
AI-generated video tools like OpenAI's Sora 2 have matured fast enough that everyday users can produce near-photoreal, prompt-driven clips in seconds — and much of the viral output isn't high art or disinformation so much as dumb pranks. Examples on TikTok range from a staged "dog burial on Everest" (with the watermark scrubbed) to "homeless intruder" scares, fake funeral footage of celebrities, and staged pets ruining weddings. These clips are easy to make, look just lifelike enough to deceive, and travel off AI-native platforms into feeds where users expect reality — prompting police PSAs and pleas from the families of deceased public figures; OpenAI has begun offering limited opt-outs. Technically, what has changed is coherent motion and realism: investment in models, compute, and tooling has eliminated much of the jumpy, uncanny quality of early generated video (ModelScope-era artifacts), enabling smooth, prompt-based generation that social algorithms reward. The implications for AI/ML are practical and ethical: detection and provenance tools must keep pace, platforms need policy and opt-out mechanisms, and researchers should study how low-effort synthetic content amplifies rage-bait and feeds back into recommendation systems. Some experts expect the novelty to fade, but the short-term risk is a volume-driven erosion of trust and increased potential for harassment, manipulation, and legal and ethical harms.