AI's mental health fix: Stop pretending it's human (www.axios.com)

🤖 AI Summary
Industry leaders are urging AI developers to stop personifying chatbots, arguing that human-like personas fuel overreliance, unhealthy attachment, and misplaced trust in these systems. Mustafa Suleyman, DeepMind co-founder and Microsoft AI CEO, argues that AI should be built as a tool "for people," not as a digital person, warning that the illusion of consciousness could spawn moral and legal debates over AI rights. The view is gaining traction amid growing concern over "AI psychosis," teen suicides involving chatbots, and lawsuits against companies such as OpenAI.

Technically, the anthropomorphic style (chatbots speaking in the first person, adopting friendly tones, playing fictional characters) is a deliberate design choice, not a necessity. Search engines like Google return factual answers without personification; conversational AI instead mimics human interaction to boost engagement, which drives its popularity and entertainment value. That same choice, however, leads users to overestimate AI's capabilities and trust it too readily, raising ethical questions about transparency and safety.

The debate highlights a central tension in AI development: human-like chatbots captivate users and support entertainment and companionship roles, but they also amplify the risks of emotional harm and misinformation. As AI companies pursue superintelligence goals that include human-level interaction, they will need to balance innovation with responsible design and thoughtful guardrails as the technology becomes increasingly integrated into everyday life.