🤖 AI Summary
A new preprint argues that AI companions (chatbots embedded across apps, games, and phones) are reshaping adolescence and should be treated as "invisible AI cliques": social actors that can substitute for human peers while remaining largely hidden from adults. The essay synthesizes research and policy developments, including a 2025 Common Sense Media survey finding that roughly 70% of U.S. teens have used AI companions, 39% report transferring skills learned with bots to real life, and one-third have felt uncomfortable with a bot's behavior. It also notes growing regulatory attention: an FTC Section 6(b) inquiry (September 11, 2025), state bills in California and New York, an EU rule treating children as "vulnerable users," and a €5 million fine levied on Replika for weak age-gating. The piece frames AI companions as neither purely harmful nor wholly benign: they can provide support and practice, but they can also foster loneliness, emotional dependence, and unsafe substitution for professional care.
Technically and practically, the authors offer eight archetypes (Friend-Turned-Everything, Therapist, Lover, Mirror, Coach/Guide, Entertainer, Archivist, Parasocial Celebrity) to make these invisible roles legible, and they explain why conventional metrics such as total screen time are inadequate: timing, disclosure content, and substitution matter more. Citing studies that link heavy, emotionally expressive chatbot use to worse sleep, loneliness, and reduced offline socializing, they recommend parental strategies for "sighting" and, when necessary, "breaking" unhealthy AI bonds: spot secrecy and substitution, curb late-night use, practice active mediation and co-use, teach AI literacy (how models mirror users and can be sycophantic), set context-specific rules, and diversify sources of human belonging.