🤖 AI Summary
Big Tech is racing to graft generative AI onto social media, but early results are chaotic. New apps and features — OpenAI’s Sora, Meta’s Vibes and Instagram AI personas, and TikTok’s AI Alive — can spin up bizarre, hyperreal short videos and chat-driven content that blur the line between creation and consumption. The rollout has triggered immediate pushback from rights holders and safety advocates: the Motion Picture Association said Sora enabled videos that infringe on its members’ films and characters, and OpenAI’s Sam Altman announced new limits (error messages for copyrighted characters), “granular control” for rights holders, and a potential revenue‑share plan. Companies point to safety guardrails, but critics note those measures aren’t yet airtight.
Technically, platforms are adopting provenance and watermarking standards (C2PA metadata, “invisible” watermarks) and content detection to flag public figures and restrict mature content, especially for teens — a response to lawsuits linking AI chatbots to youth harm. Yet journalists have shown that watermarks can be stripped, and sophisticated generators make convincing deepfakes easier to produce and spread, amplifying misinformation and copyright risk. Beyond the legal and safety implications, there’s a UX question: will users tolerate a torrent of “AI slop” in their feeds, or will these tools reshape social platforms into new AI-native entertainment ecosystems? The outcome will determine who controls content, revenue, and trust in the next era of the internet.
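To make the provenance point concrete: C2PA manifests are embedded in media files inside JUMBF boxes labeled `c2pa`. The sketch below is a crude presence probe only — an assumption-laden heuristic, not a real C2PA validator (it parses no box structure and verifies no signatures; the filenames are hypothetical). It also illustrates why stripping is easy: the metadata lives in ordinary bytes that a re-encode can simply drop.

```python
# Crude heuristic: scan a media file's raw bytes for the ASCII label
# "c2pa", which appears in the JUMBF box carrying a C2PA manifest.
# NOT a real validator -- no box parsing, no signature checks.
from pathlib import Path

C2PA_LABEL = b"c2pa"

def probably_has_c2pa_manifest(path: str) -> bool:
    """Return True if the file's raw bytes contain the C2PA JUMBF label."""
    return C2PA_LABEL in Path(path).read_bytes()

if __name__ == "__main__":
    # Synthetic demo files (real assets would be JPEG/MP4/etc.).
    Path("tagged.bin").write_bytes(b"\xff\xd8...jumb...c2pa...\xff\xd9")
    Path("plain.bin").write_bytes(b"\xff\xd8...no manifest...\xff\xd9")
    print(probably_has_c2pa_manifest("tagged.bin"))  # True
    print(probably_has_c2pa_manifest("plain.bin"))   # False
```

Real verification requires a conformant C2PA implementation that walks the box structure and checks the cryptographic signature chain; a byte probe like this can be defeated (or triggered) trivially, which mirrors the watermark-stripping problem the article describes.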