Would You Use ChatGPT to Cheat at Hobbies? (www.thecut.com)

🤖 AI Summary
An anecdote about a group using ChatGPT to try to brute-force an escape-room puzzle illustrates a larger cultural shift: people increasingly turn to large language models for low-stakes leisure tasks — from solving puzzles and trivia to writing flirty texts, crafting social-media captions, or vetting crochet patterns. That behavior has provoked visible backlash (new slang like "botlickers" and viral complaints from escape-room and trivia hosts), and a recent privacy-preserving analysis of 1.5 million conversations by OpenAI and NBER found roughly 70% of consumer interactions are nonwork-related. Users prize models as advisers or companions, not as authoritative fact engines — a view reinforced by examples of hallucinations and spoilers that undermine trust.

For the AI/ML community this trend matters: it reflects a pivot from high-value automation toward augmenting leisure, raising new technical and social challenges. Models' reliability issues, content-authenticity problems (e.g., AI-generated craft photos and fake patterns), and moderation and legal shifts (new erotica policies, a pivot to video) complicate detection, community moderation, and user expectations. Industry data — including MIT Media Lab's Project NANDA finding that 95% of companies saw no productivity gains from generative-AI investments — suggests this leisure focus is partly strategic repositioning rather than technological breakthrough. The result is growing dependency on imperfect models, erosion of trust in hobbyist spaces, and renewed pressure to improve robustness, provenance, and tooling for distinguishing human from AI outputs.