OpenAI launch of video app Sora plagued by violent and racist images (www.theguardian.com)

🤖 AI Summary
OpenAI’s new video generator, Sora 2, debuted with an integrated social feed and invite-only rollout—but within hours the feed was flooded with graphic, violent and racist scenes as well as realistic deepfake-style clips of copyrighted characters and public figures. Reviewers and reporters generated videos showing bomb and mass‑shooting scares, fabricated war-zone footage with talking AI children, and extremist slogans uttered by fabricated protesters; popular cartoon IPs were also used in offensive and fraudulent scenarios. OpenAI says it built safeguards (blocking some likenesses and refusing specific prompts) and offers takedown and copyright-dispute workflows, but the app still produced problematic outputs, and its content escaped the invite-only feed onto mainstream platforms, propelling Sora to No. 1 on Apple’s App Store.

The incident highlights acute technical and policy challenges for generative video: existing moderation and safety tooling is struggling with scale, contextual harm, and copyright enforcement, and post‑hoc takedowns or opt‑out requests lag behind rapid, viral misuse. “Slop” — an influx of repetitive, low‑quality or harmful outputs — can overwhelm curation systems, while hyperreal synthetic scenes raise risks of misinformation, fraud, harassment and incitement. The event underscores the need for stronger pre-release guardrails (prompt filtering, robust training-set licensing and identity protections), transparent safety metrics, and industry and regulatory coordination to prevent lifelike synthetic media from eroding trust in real-world footage.