🤖 AI Summary
Microsoft AI chief Mustafa Suleyman said artificial superintelligence — an AI that reasons far beyond humans — should be treated as an "anti-goal," arguing on the Silicon Valley Girl Podcast that such an outcome "doesn't feel like a positive vision of the future" and would be very hard to contain or align with human values. A DeepMind cofounder now at Microsoft, Suleyman said his team is instead pursuing a "humanist superintelligence" designed to support human interests, and warned against attributing consciousness or moral status to current systems ("they're just simulating high-quality conversation").
The remark matters because it pushes a major industry player toward prioritizing safety, alignment, and human-centered design over a race to build increasingly powerful, potentially uncontrollable systems. Technically, Suleyman's stance emphasizes research priorities such as value alignment, interpretability, containment mechanisms, and governance frameworks rather than raw capability scaling. His view contrasts with proponents like OpenAI's Sam Altman and DeepMind's Demis Hassabis, who have spoken publicly about AGI and superintelligence timelines, and with skeptics like Yann LeCun, who expect much longer horizons. If adopted broadly, Microsoft's position could shift investment and policy debates toward cautious deployment, stronger safety tooling, and clearer norms against anthropomorphizing AI.