Comparing Claude Code vs. OpenCode (www.andreagrandi.it)

🤖 AI Summary
A developer compared Claude Code to OpenCode (a provider-agnostic coding agent that can use existing API keys or subscriptions such as GitHub Copilot) by asking each to implement a real-world change: add a nullable `new_field` to an existing entity/model, create an Alembic migration, and fix the tests. The goal was to evaluate correctness, iteration speed, and behavior when integrated into a typical VSCode workflow. The comparison matters because it highlights practical trade-offs of using multi-model agents in production dev workflows: reliability, hallucinations, code formatting, and how models handle existing tests and fixtures.

Results:
- Claude Code produced the best initial output, requiring only minor fixes (nullable vs. not-null, an odd default).
- OpenCode + sonnet-4 (Anthropic) produced similar code but reformatted files unexpectedly and removed six existing tests while adding two, requiring quick iterations to restore test coverage.
- OpenCode + gemini-pro-2.5 hallucinated fixtures, duplicated code, and rewrote a class poorly, so the author discarded that run.
- OpenCode + gpt-4.1 needed a couple of edits but produced a clean final result; notably, gpt-4.1 via a Copilot subscription offered effectively unlimited usage.

The author suspects an OpenCode bug that triggers unwanted reformatting (mitigated via an AGENT.md rule) and concludes that Claude Code is still on top, but OpenCode, especially with sonnet-4 or gpt-4.1, looks promising.
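The article doesn't reproduce the AGENT.md rule the author used to suppress OpenCode's unwanted reformatting; a plausible sketch of that kind of instruction (wording entirely assumed, not quoted from the source) might be:

```markdown
# AGENT.md

## Editing rules
- Do not reformat files. Only change the lines required for the task.
- Preserve the existing code style, imports, and whitespace of any file you edit.
- Never delete existing tests; add new tests alongside them.
```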
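The benchmark task hinges on a detail Claude Code initially got wrong: making the new column nullable rather than NOT NULL. A minimal sketch of why that distinction matters for a migration, using stdlib `sqlite3` instead of the project's actual SQLAlchemy/Alembic setup (the `items` table and its columns other than `new_field` are illustrative assumptions):

```python
import sqlite3

# Simulate an existing table with data, then apply the kind of DDL an
# Alembic migration's op.add_column() would emit for the new column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO items (name) VALUES ('existing row')")

# Adding the column as nullable succeeds even though rows already exist:
conn.execute("ALTER TABLE items ADD COLUMN new_field TEXT")
value, = conn.execute("SELECT new_field FROM items").fetchone()
# value is None: existing rows get NULL, no backfill or default required.

# By contrast, a NOT NULL column without a default is rejected on a
# non-empty table, which is why "nullable vs. not-null" matters here:
not_null_failed = False
try:
    conn.execute("ALTER TABLE items ADD COLUMN strict_field TEXT NOT NULL")
except sqlite3.OperationalError:
    not_null_failed = True
```

The same trade-off applies regardless of backend: a nullable column (or one with a server default) is the safe shape for an additive migration against live data.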