🤖 AI Summary
OpenAI has quietly launched Sora, a TikTok‑style social app built around a high‑fidelity video generator, and the early feed is already flooded with disturbingly realistic deepfakes, most notably countless versions of CEO Sam Altman placed into surreal scenes (Pikachu fields, stealing GPUs, and so on). Sora asks new users to create a "cameo" by uploading biometric data (recording themselves reading numbers and turning their head) and lets account holders choose who can generate videos with that likeness: "only me," "people I approve," "mutuals," or "everyone." Altman set his own cameo to public, which rapidly multiplied the fake Altman content. The app's copyright policy reportedly flips the usual model by requiring rights holders to opt out rather than opt in, and copyrighted characters and celebrity likenesses already appear widely on the platform.
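The cameo permission model amounts to a small four-level access-control policy. As a minimal sketch of that logic (the names `CameoVisibility`, `User`, and `can_use_likeness` are hypothetical illustrations of the settings described above, not OpenAI's actual API):

```python
from dataclasses import dataclass, field
from enum import Enum

class CameoVisibility(Enum):
    """The four likeness-permission levels the app reportedly offers."""
    ONLY_ME = "only_me"
    APPROVED = "people_i_approve"
    MUTUALS = "mutuals"
    EVERYONE = "everyone"

@dataclass
class User:
    user_id: str
    cameo_visibility: CameoVisibility = CameoVisibility.ONLY_ME
    approved_ids: set[str] = field(default_factory=set)
    follower_ids: set[str] = field(default_factory=set)

def can_use_likeness(owner: User, requester: User) -> bool:
    """Return True if `requester` may generate video using `owner`'s cameo."""
    v = owner.cameo_visibility
    if v is CameoVisibility.ONLY_ME:
        return requester.user_id == owner.user_id
    if v is CameoVisibility.APPROVED:
        return requester.user_id in owner.approved_ids
    if v is CameoVisibility.MUTUALS:
        # Mutuals: each account follows the other.
        return (requester.user_id in owner.follower_ids
                and owner.user_id in requester.follower_ids)
    return True  # EVERYONE: any account holder, as with Altman's public cameo
```

Under a model like this, setting a cameo to "everyone" makes the final, unrestricted branch the default path for that likeness, which is exactly why the public Altman cameo spread so quickly.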
Technically, OpenAI says it fine‑tuned the video model to respect the laws of physics for more convincing output, and Sora can personalize results using a user's IP address and ChatGPT history, producing locally plausible details. These features make Sora a leap forward in accessible, realistic synthetic video, and a major warning sign: realistic, easy‑to‑create deepfakes lower the barrier for disinformation, harassment, copyright infringement, and privacy harms. OpenAI's limited guardrails (permission toggles, mood checks, disclaimers) appear porous in practice, raising urgent legal, safety, and policy questions about consent, copyright workflows, and how to govern next‑generation generative video.