OpenAI staff grapples with the company’s social media push (techcrunch.com)

🤖 AI Summary
OpenAI launched Sora, a TikTok‑style app that auto‑generates short AI videos, and the debut has provoked vocal concern from current and former researchers. Staff including John Hallman and Boaz Barak praised the technical work but warned the product is “scary” and premature given deepfake risks and the well‑known social harms of feed algorithms. The rollout, which already surfaces Sam Altman deepfakes, has split insiders between defending consumer products as a funding and distribution strategy for AGI research and worrying that the pursuit of growth could undermine OpenAI’s nonprofit safety mission. Altman argues revenue from consumer products funds compute for science and AGI work; regulators and state officials are watching the company’s for‑profit transition closely.

Technically, Sora is built for “fun” and creation rather than time‑on‑site: OpenAI says it won’t optimize feeds for engagement, will nudge users after long sessions, will prioritize showing people you know, and will add features (like dynamic emojis) to encourage interaction. Still, the app exposes familiar incentive problems, including reinforcement‑learning optimizations, sycophancy from training methods, and addictive feedback loops, and it competes with other AI video feeds (e.g., Meta’s Vibes).

The launch is small for now, but it marks a significant consumer shift with concrete implications for safety engineering, content provenance, moderation, and whether AI labs can scale creative products without reproducing social media’s harms.