🤖 AI Summary
Microsoft’s AI chief Mustafa Suleyman told CNBC he believes consciousness is exclusive to biological beings and that pursuing “seemingly conscious” AI is the wrong project for researchers and developers. Drawing on biological naturalism — the philosophical position associated with John Searle that consciousness depends on biological processes — he argued current models only simulate the narrative of experience: they lack the pain networks, subjective suffering and underlying living processes that ground human preferences and rights. Suleyman stressed that engineers can observe model internals and behavior, and therefore shouldn’t conflate sophisticated responses or apparent self‑narratives with real feelings or suffering.
The stance matters because Suleyman is a senior industry voice shaping Microsoft’s product and policy choices: he says the company will build AIs that are explicitly “aware they are AI,” avoid certain companion use cases (like erotica), and focus on sculpting safe personalities (e.g., Copilot’s “real talk”). His view reframes debates around AGI and companion markets, reinforces calls for caution and clearer regulation (e.g., disclosure laws like California’s SB243), and could steer research priorities away from claims about machine consciousness toward measurable capabilities, alignment, and ethical deployment. Critics argue consciousness detection is still nascent, but Suleyman’s position will influence how firms balance innovation, responsibility and public messaging about what AI actually experiences.