🤖 AI Summary
“Botsplaining” is the author’s term for a growing social-media habit: instead of replying directly, someone copies your post into an LLM (e.g., ChatGPT), asks it to craft a contrarian or condescending rebuttal, and pastes the response back as if it were their own. The practice usually goes undisclosed, produces the unmistakable canned LLM style, and can persist even after being asked to stop; people simply keep feeding the thread into the model until the conversation dies or gets blocked. The author, who has repeatedly encountered this across platforms and workplaces, argues it is disrespectful: when you asked a human, you wanted human judgment, not an AI’s generic voice delivered by a third party.
For the AI/ML community this highlights both social and technical implications. Botsplaining erodes norms of authentic discourse, flattens nuance (LLMs hallucinate or oversimplify), and steers social interaction toward automation rather than clarification or empathy. It also raises policy and product questions: how to detect undisclosed LLM-generated replies, whether platforms should require attribution, and how designers can nudge users to ask clarifying questions or admit uncertainty instead of acting as “middlemen.” The takeaway is a call for etiquette and tooling: disclose when you use a model, reserve AI for when AI answers are wanted, and preserve human-to-human exchange where context, judgment, or respect are at stake.