Spinning Plates (www.dylanamartin.com)

🤖 AI Summary
In the past six months the author's dev workflow has been radically accelerated by LLMs: Claude Code already generates unit tests and fills in large swaths of production-ready work, so daily throughput looks "cracked," with many shipped features and green CI. But that speed comes with a tradeoff: feedback loops have shortened, the role shifts from "builder" to "foreman" supervising code generation, and deep learning-by-doing erodes. The pattern is industry-wide: influencers and companies push AI-first stacks, and practices like pasteable prompts or chat-based refactors replace line-by-line problem solving.

This matters to AI/ML teams because it changes skill distributions, onboarding, and what counts as engineering craft, raising risks of skill atrophy, shallower mental models, and attention fragmentation that hurt performance on pathological or architectural problems.

On the technical side, the author highlights concrete workflows and mitigations: prefer in-editor tab completion and inline comments (Karpathy's Nanochat approach) over giant freeform chat prompts for production work, and reserve "no-LLM" time to build intuition by typing code manually. Use LLMs for high-ROI routine tasks (CRUD, migrations, tests, glue), but treat them as power tools, not autopilot: rubber-duck with models rather than outsourcing thought. Protect blocks of time for deep work, evaluate progress over weeks or months instead of daily throughput, and be intentional about which problems you let machines handle.

The takeaway for ML teams: design tooling and culture that preserve deep expertise while still capturing LLM leverage, or risk faster output with weaker understanding and eventual burnout.