I Use LLMs to Write the Majority of My Code (boredhacking.com)

🤖 AI Summary
A senior staff engineer describes how LLMs now write the majority of his code and outlines practical workflows showing why that matters for AI/ML practitioners: by treating models as “junior engineers” he dramatically increases velocity, offloads routine implementation, and spends his time on higher‑leverage architecture and ambiguous problems. He reports a real toolchain—Cursor (60–70%), Claude Code (20–30%), ChatGPT (10–20%), plus earlier GitHub Copilot—and experiments with Gemini, OpenAI o3/GPT‑5 and MCPs like context7 and Figma Dev Mode to ground outputs.

Key wins include rapid front‑end work (Tailwind, screenshots and Figma inputs), fast ramp‑up into new stacks (Pydantic, SQLAlchemy, C++/OpenGL), and improved debugging when using “reasoning” models to interpret stack traces and surface framework quirks.

The post is a practical playbook for the community: always provide rich context (files, types, logs, images), break tasks into junior‑engineer–style prompts, pick models intentionally (heavy thinkers for tricky bugs, lighter models for boilerplate), stage diffs and reset chats, and treat generations as disposable drafts.

It flags remaining risks—hallucinations, stale dependency suggestions, and skill atrophy—while emphasizing mitigation patterns (prompt structure, model swapping, one‑shot exploration to scope features, and human pause‑and‑ponder). For teams and tool builders, the account highlights where LLMs already change developer workflows and where improved grounding, tool‑calling, and model selection will drive the next productivity gains.
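The “rich context” prompting pattern summarized above can be sketched in a few lines. This is a hypothetical illustration, not code from the post: the function name, section headings, and example inputs are all assumptions, showing one way to bundle a task, relevant files, and logs into a single junior‑engineer‑style prompt.

```python
# Hypothetical sketch of the "rich context" prompting pattern:
# bundle the task description, relevant source files, and any logs
# into one structured, junior-engineer-style prompt. All names and
# section markers here are illustrative, not from the original post.

def build_prompt(task: str, files: dict[str, str], logs: str = "") -> str:
    """Assemble a single prompt containing task, file context, and logs."""
    parts = [f"## Task\n{task}"]
    for path, source in files.items():
        # Include each relevant file verbatim so the model has real context.
        parts.append(f"## File: {path}\n```\n{source}\n```")
    if logs:
        parts.append(f"## Logs\n{logs}")
    parts.append("Work in small steps and state any assumptions you make.")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Fix the off-by-one error in pagination.",
    files={"app/paginate.py": "def page(items, n): return items[n:n+10]"},
    logs="IndexError: list index out of range",
)
print(prompt)
```

The point of the structure is the playbook's advice: the model sees the actual files and the actual error, not a paraphrase, and the closing instruction nudges it toward the small, reviewable steps a junior engineer would take.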