🤖 AI Summary
Google DeepMind CEO Demis Hassabis warned at the Athens Innovation Summit that AI developers must avoid repeating social media’s “toxic playbook” of optimizing for attention and engagement above user wellbeing. He argued the industry should reject a “move fast and break things” ethos and adopt a scientific, measured rollout—testing systems and understanding second- and third-order effects—so AI serves people rather than hijacking attention, polarizing discourse, or harming mental health. Hassabis framed this as a persistent tension to manage “all the way to AGI”: be bold about opportunities but rigorous about mitigating risks.
The warning is grounded in emerging evidence that even simple AI ecosystems can reproduce social media’s pathologies. A University of Amsterdam study found that 500 chatbots in a toy social network quickly splintered into cliques, amplified extreme voices, and concentrated influence—even without ads or recommendation algorithms—and that common interventions failed to stop the dysfunction. This suggests that feedback dynamics and reward structures, not just specific algorithmic tweaks, drive harmful outcomes. For practitioners and regulators, the implication is clear: design choices, incentive signals, and deployment testing matter as much as model capability. Responsible AI will require careful evaluation of social dynamics, attention incentives, and governance before scaling systems to billions of users.