"ChatGPT said this" Is Lazy (terriblesoftware.org)

🤖 AI Summary
Engineers are calling out a growing habit: pasting AI-generated replies into PR reviews, design docs, or Slack as if that substitutes for human feedback. That “ChatGPT thinks…” dump forces teammates to wade through generic advice, guess what parts apply, and wonder whether the reviewer actually understands the code. The piece argues this is lazy and harmful because AI has no “skin in the game” — it doesn’t know your tech debt, business constraints, deployment patterns, or who’ll be paged at 2 a.m. — so its output can be irrelevant or even misleading when used uncritically. For the AI/ML community the takeaway is practical: use models to augment thinking, not replace it. Good reviews are concrete and contextual (e.g., “This nested loop is O(n²) and will blow up at production scale; consider a hash map here”), not generic paste-ins. Technical implications include more noise in reviews, missed scale or safety issues, longer blameless post-mortems, and degraded team accountability. The recommended practice is to run ideas by an AI if helpful, but then translate and own the conclusion in your own words, explaining why it matters for this specific codebase and deployment scenario.