🤖 AI Summary
A senior engineer reports that for a recent infrastructure project they reached “north of 90%” AI‑written code: a Go service (an OpenAPI‑compatible REST API) that sends and receives email, autogenerated Python and TypeScript SDKs, and roughly 40k lines spanning Go, YAML, Pulumi and SDK glue. Their workflow was OpenAPI‑first, leaned on raw SQL (including migrations) instead of an ORM, and kept agent outputs to PR‑sized chunks. They relied on Claude Code for debugging and environment/tool access and Codex for post‑PR code review, alternating between two agent patterns: prompting until the result is close, and tighter lockstep edits. The AI helped rapidly explore multiple OpenAPI implementations, write solid raw SQL (MERGE/WITH), scaffold test infrastructure (testcontainers plus database cloning), and speed up AWS/Pulumi setup, but the author still reviews every line and steers the architecture.
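The raw‑SQL‑over‑ORM point is concrete enough to illustrate. Below is a minimal sketch (ours, not from the article) of the kind of statement the author describes the AI handling well: a MERGE upsert executed through Go's standard database/sql package. The deliveries table, its columns, and the UpsertDelivery helper are hypothetical, and the SQL assumes PostgreSQL 15+ (where MERGE is available) with a driver that uses $N placeholders.

```go
package store

import (
	"context"
	"database/sql"
	"time"
)

// Hypothetical upsert of a message-delivery row using raw SQL instead of an ORM.
// Assumes PostgreSQL 15+ (MERGE) and $1, $2, ... style parameters (e.g. pgx or lib/pq).
const mergeDelivery = `
MERGE INTO deliveries AS d
USING (VALUES ($1::uuid, $2::text, $3::timestamptz)) AS s(id, status, updated_at)
ON d.id = s.id
WHEN MATCHED THEN
    UPDATE SET status = s.status, updated_at = s.updated_at
WHEN NOT MATCHED THEN
    INSERT (id, status, updated_at) VALUES (s.id, s.status, s.updated_at)
`

// UpsertDelivery writes or updates a delivery record; table and column names are illustrative.
func UpsertDelivery(ctx context.Context, db *sql.DB, id, status string, updatedAt time.Time) error {
	_, err := db.ExecContext(ctx, mergeDelivery, id, status, updatedAt)
	return err // propagate rather than swallow: silently dropped errors are one of the risks noted below
}
```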
The piece is significant because it shows a practical, production‑grade system where generative agents do most of the typing, accelerating prototyping, refactoring and infra work, while also exposing concrete risks: hallucinated or duplicate implementations, inappropriate abstractions, dependency rot, swallowed errors, poor concurrency decisions (goroutine/threading issues), and weak rate‑limiter implementations (no jitter, bad storage choices). The takeaway for ML/SE teams is that agent workflows can unlock real productivity and let AI produce complex SQL and infrastructure, but they do not remove the need for skilled engineers to design, review, and enforce observability, invariants and security; adoption will grow quickly, yet human judgment remains the gating factor.
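One of those risks, the rate limiter without jitter, is easy to make concrete under one common reading: clients that retry against a rate limit in lockstep. The sketch below (ours, not the author's) adds full jitter to an exponential backoff using only Go's standard library; the Do helper, its parameters, and the attempt limits are illustrative assumptions, not the article's implementation.

```go
package retry

import (
	"context"
	"math/rand"
	"time"
)

// backoff returns the delay before retry attempt n (0-based): exponential growth
// capped at max, with full jitter so concurrent clients don't hit a rate limit in lockstep.
func backoff(attempt int, base, max time.Duration) time.Duration {
	d := base << attempt
	if d <= 0 || d > max { // the shift can overflow for large attempt values
		d = max
	}
	return time.Duration(rand.Int63n(int64(d)) + 1)
}

// Do retries fn with jittered exponential backoff until it succeeds, attempts run out,
// or the context is cancelled. The parameters here are illustrative, not prescriptive.
func Do(ctx context.Context, attempts int, base, max time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		select {
		case <-time.After(backoff(i, base, max)):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err // return the last error instead of swallowing it
}
```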