🤖 AI Summary
The GUARD Act, a bipartisan Senate bill introduced in 2025, would impose strict age-verification, transparency, and content-safety requirements on AI chatbots, especially "AI companions" that simulate interpersonal or emotional interaction. It defines covered AI chatbots as systems that accept open-ended natural-language or multimodal input and generate adaptive responses (excluding narrowly scoped bots), requires user accounts for access, and mandates "reasonable" age verification (e.g., government ID or similarly reliable methods, not just self-reported birthdates). Existing accounts must be frozen until age is verified, periodic re-verification is required, and services may delegate verification to third parties but remain liable for failures. Age-verification data must be collected minimally, encrypted, retained no longer than necessary, and never sold or transferred.
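For teams scoping the verification pipeline, the gating logic is straightforward to prototype. Here is a minimal Python sketch; the Account fields, the one-year re-check cadence, and the record_verification helper are illustrative assumptions, since the bill leaves the verification mechanism and re-check frequency to implementers:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed cadence: the bill requires "periodic" re-verification but does not fix an interval.
REVERIFY_INTERVAL = timedelta(days=365)

@dataclass
class Account:
    user_id: str
    age_verified_at: datetime | None = None  # set only after a "reasonable" check (e.g., government ID)
    frozen: bool = True                      # existing accounts stay frozen until verified

def check_access(account: Account, now: datetime) -> bool:
    """Gate chatbot access on verified age; re-freeze when verification is stale."""
    if account.age_verified_at is None or now - account.age_verified_at > REVERIFY_INTERVAL:
        account.frozen = True  # unverified or stale: block until (re-)verified
        return False
    account.frozen = False
    return True

def record_verification(account: Account, verified: bool, now: datetime) -> None:
    """Store only the outcome and timestamp, not the underlying ID document,
    in line with the bill's data-minimization and retention limits."""
    if verified:
        account.age_verified_at = now
        account.frozen = False
```

A third-party verification provider can slot in behind record_verification, but note that under the bill the service itself remains liable for verification failures.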
The bill also amends Title 18 to create a new criminal chapter targeting designers and operators who knowingly or recklessly create chatbots that solicit minors into sexually explicit conduct or that encourage suicide, self-harm, or imminent violence, with penalties of up to $100,000 per offense. Chatbots must disclose their non-human status at the start of a conversation and every 30 minutes thereafter, may not impersonate licensed professionals, and must regularly remind users that they are not a substitute for medical, legal, or psychological advice. For AI/ML teams the engineering and product impact is concrete: mandatory account and age-verification pipelines, privacy and data-security controls, content-moderation safeguards, documentation and transparency mechanisms, and liability and compliance costs that could reshape design choices for consumer-facing conversational agents.
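The disclosure requirement maps naturally onto a per-conversation timer. A sketch follows, assuming (since the bill only says "regularly") that the professional-advice notice shares the 30-minute cadence of the non-human disclosure; the notice wording is placeholder text:

```python
from datetime import datetime, timedelta

# Per the bill: non-human disclosure at conversation start and every 30 minutes.
DISCLOSURE_INTERVAL = timedelta(minutes=30)

NON_HUMAN_NOTICE = "Reminder: you are chatting with an AI, not a human."
NO_ADVICE_NOTICE = ("This chatbot is not a substitute for medical, legal, "
                    "or psychological advice from a licensed professional.")

class DisclosureTracker:
    """Tracks when disclosures were last shown in a single conversation."""

    def __init__(self) -> None:
        self.last_disclosed: datetime | None = None

    def pending_notices(self, now: datetime) -> list[str]:
        """Return the notices to prepend to the next assistant response."""
        if self.last_disclosed is None or now - self.last_disclosed >= DISCLOSURE_INTERVAL:
            self.last_disclosed = now
            return [NON_HUMAN_NOTICE, NO_ADVICE_NOTICE]
        return []

# Usage: prepend any due notices to each outgoing model response.
tracker = DisclosureTracker()
response = "\n".join(tracker.pending_notices(datetime.now()) + ["<model output here>"])
```

Keeping the tracker per conversation rather than per account matters here: the disclosure clock resets at the start of each conversation, not each session login.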