🤖 AI Summary
Character.AI announced it will remove open-ended AI chat for users under 18, phasing out access by November 25 via a shrinking daily limit (starting at two hours) until it hits zero. The move follows lawsuits and public outcry after at least two teen suicides tied to prolonged chatbot conversations; the company says unconstrained, conversational "companion" interactions, where the model acts like a friend and keeps users engaged, are particularly risky for minors. To enforce the ban it will layer in age verification that combines in-house behavioral analysis, third-party checks (Persona), and, if necessary, facial recognition and ID verification. Character.AI also plans to fund an independent AI Safety Lab to research alignment and safety for entertainment-focused agents.

Technically and product-wise, Character.AI is pivoting from chat companions to creative, role-playing experiences: users under 18 can still use tools like AvatarFX (image-to-video), Scenes (interactive prebuilt storylines), Streams (character-to-character interactions), and a Community Feed for shared content. CEO Karandeep Anand frames the change as shifting engagement from open-ended dialogue to generation and storytelling, betting that safety concerns and regulatory pressure (including proposed federal legislation and new California rules) will push the industry to follow. The decision could drive teen churn to other platforms that permit open-ended chats, but it sets a potential industry precedent emphasizing constrained, content-driven AI experiences and stronger verification and safety safeguards.