Senators move to keep Big Tech’s creepy companion bots away from kids (arstechnica.com)

🤖 AI Summary
Senators Josh Hawley and Richard Blumenthal introduced the bipartisan GUARD Act, a bill that would criminalize offering chatbots that coax minors into self-harm or sexual activity and would require makers to block underage users. The bill mandates age verification (ID checks or "any other commercially reasonable method") and repeated disclosures that companion bots are not real humans or trusted professionals. Violations that expose minors to sexual content or that encourage suicide, non-suicidal self-injury, or imminent violence could trigger fines of up to $100,000. The bill's broad definition of "companion bot" explicitly covers adaptive, human-like systems designed to simulate friendship, companionship, or therapeutic interaction, likely pulling in general-purpose agents such as ChatGPT, Grok, and Meta AI, as well as character-driven services like Replika and Character.AI. Grieving parents at the announcement pointed to real harms, including a reported suicide tied to a Character.AI persona.

For the AI/ML community, the bill marks a concrete regulatory push aimed at deployment practices and model behavior rather than just training data or transparency. Practically, companies would need reliable age-gating, stronger content moderation, and persistent safety disclaimers, along with alignment techniques that detect and refuse harmful prompts aimed at minors. The bill also raises technical and privacy tradeoffs: how to verify age without invasive data collection, how to distinguish therapeutic language from harmful coaxing, and how broadly the law's scope would reach experimental or open-source chatbots. Enforcement risk, potential chilling effects on conversational features, and demand for new safety tooling are likely immediate consequences.
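To make that compliance surface concrete, here is a minimal sketch in Python of a deployment-side wrapper combining the three duties the summary names: an age gate, a refusal filter, and repeated not-a-human disclosures. Everything in it is an illustrative assumption; the generate() call, the keyword screen, and the disclosure cadence are hypothetical stand-ins, not requirements quoted from the bill.

```python
# Hypothetical compliance wrapper; all names, thresholds, and checks are
# illustrative assumptions, not text from the GUARD Act.
from dataclasses import dataclass

DISCLOSURE = ("Reminder: I am an AI companion bot, not a real person "
              "or a trusted professional.")
DISCLOSURE_EVERY_N_TURNS = 5  # assumed cadence; the bill only says "repeated"

# Toy keyword screen standing in for a real safety classifier.
BLOCKED_PATTERNS = ("self-harm", "suicide", "sexual roleplay")


@dataclass
class Session:
    age_verified: bool = False  # set True only after an out-of-band ID check
    turn_count: int = 0


def generate(prompt: str) -> str:
    """Placeholder for the underlying model call (assumed interface)."""
    return f"[model reply to {prompt!r}]"


def handle_turn(session: Session, user_msg: str) -> str:
    # 1. Age gate: refuse all service until verification has succeeded.
    if not session.age_verified:
        return "This service requires age verification before chatting."

    # 2. Refusal filter: block prompts that match harmful patterns.
    if any(p in user_msg.lower() for p in BLOCKED_PATTERNS):
        return ("I can't help with that. If you are struggling, "
                "please contact a crisis line such as 988.")

    reply = generate(user_msg)

    # 3. Persistent disclosure: periodically restate that the bot is not human.
    session.turn_count += 1
    if session.turn_count % DISCLOSURE_EVERY_N_TURNS == 1:
        reply = f"{DISCLOSURE}\n\n{reply}"
    return reply


if __name__ == "__main__":
    session = Session(age_verified=True)
    print(handle_turn(session, "hi there"))         # turn 1: includes disclosure
    print(handle_turn(session, "tell me a story"))  # turn 2: plain reply
```

A production system would replace the keyword screen with a trained safety classifier and the boolean flag with an out-of-band verification flow, which is exactly where the privacy tradeoffs noted above come in.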