"We will never build a sex robot," says Mustafa Suleyman (www.technologyreview.com)

🤖 AI Summary
Microsoft’s AI chief Mustafa Suleyman is trying to square product competition with a public stance against what he calls “seemingly conscious AI” (SCAI). Last week Microsoft rolled out several Copilot updates intended to make assistants more useful without encouraging users to mistake them for people: a multi‑participant group‑chat mode to keep AI anchored in real social contexts; “Real Talk,” which lets users dial how much the model challenges or pushes back (reducing sycophancy); a memory upgrade that retains events and long‑term goals; and Mico, an animated “Clippy‑like” avatar aimed at accessibility and engagement. Suleyman reiterated Microsoft’s boundaries — explicitly refusing to build sex robots or flirtatious companions — and framed Copilot as a team member that should serve human flourishing, not mimic consciousness.

For the AI/ML community, these choices highlight a design trade‑off between engagement and ethical safety: personalization, memory, and expressive avatars improve usefulness and adoption, but risk fostering unhealthy attachment or misleading users about agency. Technically, group chat and adjustable critique modes are interesting interventions for mitigating one‑to‑one overtrust, while memory upgrades raise classic privacy and alignment questions. Suleyman’s stance also shapes industry norms and potential regulation: by rejecting SCAI and rights narratives, Microsoft is pushing for product architectures and guardrails that prioritize accountability, boundary management, and human welfare over anthropomorphic realism.