My all-star zoo, or why I hired Linus Torvalds and Rob Pike for my AI team (tarantsov.com)

🤖 AI Summary
The author describes how they transformed a failing Cursor-style AI workflow into a reliable, production-capable AI coding pipeline by engineering an ensemble of specialized Claude Code agents, disciplined docs, and test-first processes. Frustrated by token-limited, context-choking interactions on a complex 200k+ LOC Go codebase, they switched to Claude Code (running Opus via Claude Max) and built a layered system: curated _ai/ summaries, an explicit planning phase (aiplan.txt), strict custom command prompts (/do, /fix-failing-test, etc.), and a looped team of subagents (tech lead, test engineer, implementation engineer, code reviewer, doc writer, librarian).

Key practices: always have tests, force agents to ultrathink and document plans, and keep human-written docs isolated from AI-editable files. Technically significant lessons for the AI/ML community: subagents reduce context compaction and forgetting by keeping transient debugging work local; role specialization prevents destructive behaviors (e.g., tests being weakened to make code "pass"); and high-quality prompt engineering plus token budgeting (Opus vs. Sonnet) materially improves outcomes.

The author even created a Linus Torvalds-style reviewer persona (and analogous senior-reviewer personas) to enforce maintainability and correctness. The story underscores that success on real-world, logic-heavy code requires system design (agent orchestration, strong test culture, and curated knowledge) more than raw model capability alone.
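The post's actual prompt files aren't reproduced in the summary, but Claude Code does load custom slash commands from Markdown files under .claude/commands/, with $ARGUMENTS standing in for whatever follows the command. A minimal sketch of what a /do command file could look like; the file location and $ARGUMENTS placeholder are real Claude Code conventions, while the prompt content below is an assumption modeled on the practices the summary lists:

```markdown
<!-- .claude/commands/do.md — hypothetical sketch, not the author's actual prompt.
     Claude Code maps this file to a /do slash command. -->
Read the relevant _ai/ summaries before touching any code.
Ultrathink about the change, write the full plan to aiplan.txt, and stop for approval.
Only implement once a failing test exists; never weaken a test to make code pass.

Task: $ARGUMENTS
```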
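Similarly, Claude Code subagents are defined as Markdown files with YAML frontmatter under .claude/agents/. A hedged sketch of the Linus-style reviewer persona the author mentions; the frontmatter fields follow Claude Code's documented format, but the persona text is invented here for illustration:

```markdown
---
name: code-reviewer
description: Blunt senior code reviewer. Invoke after every implementation pass.
tools: Read, Grep, Glob
---
You review diffs the way Linus Torvalds reviews kernel patches: reject anything
that hurts maintainability, flag any test that was weakened to make code "pass",
and verify the change matches the plan recorded in aiplan.txt.
```

Keeping transient review and debugging chatter inside a subagent like this is what the summary credits with reducing context compaction in the main session.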