🤖 AI Summary
LLMs can seem paradoxically brilliant and brittle at programming because “being good at programming” actually splits into two different skillsets: deep conceptual reasoning about algorithms, architectures and tricky bugs, versus the ability to ingest, recall and recombine large amounts of API docs, tutorials and example code. The author argues—echoing Andrej Karpathy’s “vibe coding” observations—that current LLMs overwhelmingly excel at the latter. They produce boilerplate, stitch together API calls, translate code between languages and libraries, and rapidly generate examples from documentation, often saving developers unfamiliar with a stack a great deal of time. That’s why LLMs have clear product–market fit in many coding workflows even if they aren’t autonomous programmers.
Technically, LLMs do well at pattern-matching and synthesizing existing code (e.g., translating pandas/matplotlib to polars/plotnine or generating PyTorch snippets), but they fall short on tasks requiring novel algorithm design, deep debugging across interacting components, or weighing architectural tradeoffs. The author recounts iterative, circular debugging with a biotite superimpose() call as an example of when the model “flails” rather than reasons. Practical implication: treat LLMs as an advanced, searchable code oracle (a supercharged Stack Overflow) that speeds up routine tasks but requires human oversight for design, correctness and subtle bug-hunting. Eventually models might acquire both skillsets, but today they’re powerful assistants—not replacements—for experienced engineers (the author used Claude Sonnet 4.5).