What works well and doesn't with AI coding agents in October 2025 (mdelapenya.xyz)

🤖 AI Summary
A maintainer of Testcontainers for Go documented a hands-on refactoring experiment: migrating 60 modules from testcontainers.GenericContainer() to the new testcontainers.Run() API. After doing 19 modules manually over 7 days to learn the patterns and edge cases, they used Claude Code to finish the remaining 41 modules in 3 days. The human-first phase produced reference implementations and a concise migration plan the agent then followed: make per-module commits, run the tests (make pre-commit test), and open PRs. It is a practical human+AI workflow for large, real-world codebase changes: humans capture the idioms and edge cases, then delegate the repetitive, well-specified work to an agent.

Key technical takeaways that made the automation safe and reliable:

- Always build a moduleOpts slice of type []testcontainers.ContainerCustomizer and call testcontainers.Run(ctx, img, moduleOpts...).
- Order matters: process custom Option types first to populate internal state, then apply defaults and conditional options (e.g., TLS), and append the user opts last.
- Prefer the built-in functional options (WithEnv, WithExposedPorts, WithFiles, WithWaitStrategy) over CustomizeRequestOption, and avoid WithImage.
- Return concrete container structs, not interfaces, and initialize the container variable before the error check.
- Inspect env vars post-run with strings.CutPrefix for early exits.

The experiment highlights that well-crafted prompts, explicit invariants (naming, error formats), and a persistent plan file let coding agents accelerate large refactors while preserving tests and backwards compatibility, but they still need human oversight for edge cases and policy constraints.
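To make the takeaways concrete, here is a minimal sketch of what a migrated module's Run constructor could look like. It is not code from the post: the xyz module name, the Container struct, the WithTLS option, and the image, port, env-var, and log values are hypothetical, and it assumes the testcontainers-go signatures testcontainers.Run(ctx, img, opts ...ContainerCustomizer) and ContainerCustomizer.Customize(*GenericContainerRequest) error.

```go
package xyz // hypothetical module name used only for illustration

import (
	"context"
	"fmt"
	"strings"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// Container is the concrete struct returned to callers (no interface).
type Container struct {
	testcontainers.Container
}

type settings struct {
	tlsEnabled bool
}

// Option is a module-specific option. It carries its logic in the function
// value and satisfies ContainerCustomizer with a no-op Customize, so it can
// travel in the same opts slice as the built-in functional options.
type Option func(*settings)

func (o Option) Customize(req *testcontainers.GenericContainerRequest) error {
	return nil // no-op: Option only mutates module settings
}

// WithTLS is a hypothetical module option, included to show ordering.
func WithTLS() Option {
	return func(s *settings) { s.tlsEnabled = true }
}

// Run replaces the old GenericContainer-based constructor.
func Run(ctx context.Context, img string, opts ...testcontainers.ContainerCustomizer) (*Container, error) {
	// 1. Process custom Option types first so internal state is populated
	//    before defaults and conditional options are decided.
	var st settings
	for _, opt := range opts {
		if o, ok := opt.(Option); ok {
			o(&st)
		}
	}

	// 2. Defaults go into a single moduleOpts slice...
	moduleOpts := []testcontainers.ContainerCustomizer{
		testcontainers.WithExposedPorts("5432/tcp"),
		testcontainers.WithEnv(map[string]string{"XYZ_PASSWORD": "secret"}),
		testcontainers.WithWaitStrategy(wait.ForLog("ready to accept connections")),
	}

	// 3. ...then conditional options such as TLS...
	if st.tlsEnabled {
		moduleOpts = append(moduleOpts, testcontainers.WithEnv(map[string]string{"XYZ_TLS": "on"}))
	}

	// 4. ...and the user-supplied opts last, so they can override the defaults.
	moduleOpts = append(moduleOpts, opts...)

	ctr, err := testcontainers.Run(ctx, img, moduleOpts...)
	// Initialize the returned struct before checking the error, so callers
	// can still terminate a partially started container.
	var c *Container
	if ctr != nil {
		c = &Container{Container: ctr}
	}
	if err != nil {
		return c, fmt.Errorf("run xyz container: %w", err)
	}
	return c, nil
}

// passwordFromEnv shows the post-run env inspection pattern: scan the
// container's env entries (e.g. from an inspect call) and exit early once
// the value is found, using strings.CutPrefix.
func passwordFromEnv(envs []string) (string, bool) {
	for _, e := range envs {
		if v, ok := strings.CutPrefix(e, "XYZ_PASSWORD="); ok {
			return v, true
		}
	}
	return "", false
}
```

Appending the user opts last is what lets callers override the module defaults, while the early pass over custom Option values is what lets the module react to them before wiring defaults, for example only adding TLS-related options when requested.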