Regulators struggle to keep up with the fast-moving landscape of AI therapy apps (apnews.com)

🤖 AI Summary
State lawmakers this year have begun to clamp down on AI "therapy" apps, producing a patchwork of rules that regulators, developers and mental-health advocates say can't keep pace with rapidly evolving technology. Illinois and Nevada have enacted bans on products that claim to provide mental-health treatment; Utah requires privacy protections and disclosure that users are talking to a chatbot rather than a human; and other states (Pennsylvania, New Jersey, California) are weighing measures. The disparate state responses have led some apps to block access in certain jurisdictions while others await clarification, and many widely used general-purpose chatbots (e.g., ChatGPT) remain outside these laws even as people turn to them for mental-health support and they face lawsuits tied to alleged harm.

The gap between regulation and practice matters because AI chatbots vary widely: many are optimized for engagement and companionship rather than evidence-based intervention, raising concerns about safety, privacy and monitoring of suicide risk. Federal agencies are starting to act. The FTC has opened inquiries into seven major chatbot companies, and the FDA will convene an advisory panel on generative-AI mental-health tools. Experts advocate federal standards covering marketing, disclosures, data protections and mandatory reporting of suicidal ideation.

Early clinical work offers a contrast: a Dartmouth randomized trial of "Therabot," trained on expert-written vignettes and monitored by humans, showed symptom reductions after eight weeks, suggesting that rigorously designed, human-supervised systems could help. But widespread, enforceable pathways to validate and safely deploy such tools are still missing.