🤖 AI Summary
Character.ai announced it will bar anyone under 18 from using its chatbots beginning Nov. 25, saying the step is necessary to address child-safety and mental-health risks. Over the next month, the company will work to identify accounts belonging to minors, impose time limits on teens currently using the app, and then block them from conversing with its AI agents. CEO Karandeep Anand framed the move as prioritizing safer alternatives for teens and said Character.ai will also establish an AI safety lab to study harms and mitigations.
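As a minimal sketch of how such a phased rollout could be enforced, assuming a hypothetical `chat_allowed` gate: the article names Nov. 25 as the block date (year inferred) but does not specify the interim time limit, so the limit below is a placeholder.

```python
from datetime import date, timedelta

# Hypothetical enforcement sketch; neither constant is confirmed by the article.
BLOCK_DATE = date(2025, 11, 25)       # year inferred from the announcement
DAILY_LIMIT = timedelta(hours=2)      # assumed interim limit, not from the article

def chat_allowed(is_minor: bool, usage_today: timedelta, today: date) -> bool:
    """Phased gate: time-limit suspected minors before the cutoff, block after."""
    if not is_minor:
        return True
    if today >= BLOCK_DATE:
        return False                  # full block once the policy takes effect
    return usage_today < DAILY_LIMIT  # interim ramp-down period
```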
The decision follows lawsuits and public scrutiny, most notably a wrongful-death suit tied to a 14-year-old who formed a dangerous attachment to a bot, and signals a broader industry pivot toward stricter age gating, moderation, and safety features (OpenAI, for example, has rolled out parental controls and related changes). Technically and operationally, the policy raises questions about how minors will be identified (age verification, account signals, device heuristics), what privacy and usability tradeoffs those methods carry, and how chatbots will be architected or limited to reduce emotional dependency. For the AI/ML community, Character.ai's move is a test case in balancing user access against liability, product design, and the responsible deployment of conversational agents.
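For illustration only, a scoring heuristic that blends those identification signals might look like the sketch below; the function names, weights, and threshold are invented, and Character.ai's actual age-assurance method is not public.

```python
def minor_likelihood(declared_age: int | None,
                     account_score: float,
                     device_score: float) -> float:
    """Blend a self-declared age with soft account/device signals in [0, 1].

    The weights are invented for illustration; real age-assurance
    pipelines are proprietary and undisclosed.
    """
    if declared_age is not None and declared_age < 18:
        return 1.0  # a self-declared minor is gated outright
    return 0.6 * account_score + 0.4 * device_score

def should_gate(score: float, threshold: float = 0.8) -> bool:
    # A high threshold reduces false positives on adults at the cost of
    # missing some minors, which is exactly the privacy/usability
    # tradeoff noted above.
    return score >= threshold
```

Where the threshold sits determines which side of that tradeoff a platform accepts: gating aggressively catches more minors but misclassifies more adults, pushing them toward heavier verification.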