🤖 AI Summary
Anthropic’s Claude Sonnet 4.5 landed shortly after the author’s September update, and after a month of use the main takeaway is conceptual: “slow is fast.” The author argues that attention, not raw model latency, is the scarce resource for coding. Faster, cheaper models like Sonnet 4.5 may feel productive but can drain attention by producing flaky, error-prone output you must constantly fix; higher-quality, slower models (e.g., Opus 4.1 in their experience) free cognitive bandwidth and end up accelerating real progress. They also note diminishing marginal improvements between model releases: Sonnet 4.5 offers incremental gains at a better cost rather than a clear SOTA leap.
Practical experiments with Codex (Cloud, Slack, GitHub) reveal its limits: Cloud Codex is useful for exploratory queries about a codebase but struggles with complex environment setup and has constrained runtimes that make builds (especially Rust) painfully slow. Slack integration feels premature, offering notifications only, with no PR submission or conversational follow-up, while GitHub Codex reviews are surprisingly valuable, catching real bugs and leaving succinct approvals rather than nitpicks. The broader industry trend is subscription pricing and consolidation around a few SOTA providers (OpenAI, Anthropic, Google); the author advises developers to pick a top-tier model, reevaluate every 1–3 months, and spend most of their attention on language, tools, and product work rather than daily AI news.