I Don't Want to Code with LLMs (blaines-blog.com)

🤖 AI Summary
A developer publishes a forceful rejection of using LLMs for everyday coding, arguing that the hype around AI-assisted programming overstates the benefits and masks real harms. The author points to an academic study in which economists and ML experts predicted large productivity gains, yet the measured result was a 19% productivity loss (despite developers' self-reports of perceived speedups), and argues that coordination, not raw typing speed, is the real bottleneck on large teams. They concede LLMs are handy for small, well-defined chores (file conversions, simple data mappings, test scaffolding, or as a quick search) but insist those gains are marginal and no substitute for real tooling or documentation links. Technically, the piece stresses where LLMs fail: they degrade on large contexts and complex system-level reasoning, don't exhibit the hoped-for emergent logical circuits, and eventually produce errors that prompting can't fix. The required "human-in-the-loop" review often makes the workflow slower and robs developers of the cognitive process of writing code, which the author sees as essential for building understanding. They warn of skill atrophy ("use it or lose it") and a future of opaque, barely understood codebases maintained by a few. Ultimately the author chooses to avoid LLMs, framing the choice as preserving mastery, correctness, and long-term craft over short-term convenience.