I Ask AI for Permission Now (and I Hate Myself for It) (www.codecabin.dev)

🤖 AI Summary
A product manager/engineering manager describes how casual reliance on large language models morphed from a helpful editor into a constant "permission" loop: short Slack updates and performance reviews get pasted into Claude or ChatGPT and iterated on until polished, but in the process the author's voice is erased. A manager called out an obviously AI-generated tone, exposing the core problem: models don't just validate, they rewrite, producing machine-like, generic prose that can undermine authenticity and trust in workplace communication.

The piece frames this as an emotional and professional problem: shame about needing validation, fear of being judged for "AI-written" content, and the uncertainty inherent in management work, which has no clear test suite. The author distinguishes good and bad AI use cases (good: specs, READMEs, POCs, grammar; bad: performance feedback, personal messages) and shares mitigation tactics: voice notes instead of typed prompts, instructing the model to "ask me questions first," using personas and heavy context, and cross-checking outputs across models.

The broader implications for the AI/ML community: build tools and prompts that preserve authorial voice, surface uncertainty instead of overconfident rewrites, and design better human-in-the-loop workflows and authenticity metrics, so AI can be a rubber duck and amplifier, not a substitute for human judgment.