🤖 AI Summary
As LLMs make writing code easy, this piece argues that code review has grown more important, and that many engineers are doing it wrong. The most common mistakes: focusing only on the diff instead of on how the change fits into the broader system, leaving dozens of line-level comments (the author recommends limiting reviews to ~5–6 substantive comments, and leaving a single note for a recurring stylistic issue rather than repeating it at every occurrence), and critiquing every place you'd have coded differently. Equally important is being explicit about review status: if you don't want a change merged, leave a blocking review; otherwise approve (see the sketch below). For fast-moving feature teams, the author argues most reviews should be approvals, possibly suggesting minor improvements along the way, while infra or safety-critical code may warrant more blocking gatekeeping.
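To make the "be explicit" point concrete, here is a minimal sketch of submitting an unambiguous review verdict through GitHub's REST API (`POST /repos/{owner}/{repo}/pulls/{number}/reviews`). The repository name, PR number, and token handling are hypothetical placeholders, not from the post:

```python
import os
import requests

# Hypothetical repo and PR number, purely for illustration.
OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 123

def submit_review(event: str, body: str) -> None:
    """Submit an explicit review verdict on a pull request.

    event must be one of "APPROVE", "REQUEST_CHANGES", or "COMMENT";
    the first two make your intent unambiguous, while a bare COMMENT
    leaves the author guessing whether you meant to block the merge.
    """
    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/reviews",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"event": event, "body": body},
        timeout=10,
    )
    resp.raise_for_status()

# Fast-moving feature team: approve, noting minor improvements.
submit_review("APPROVE", "LGTM; consider extracting the retry logic into a helper.")

# Safety-critical code: block explicitly rather than leaving ambiguous comments.
# submit_review("REQUEST_CHANGES", "Blocking: this bypasses the input validation layer.")
```

Either verdict is fine; the failure mode the post warns against is the ambiguous middle state, where comments pile up with no explicit approve-or-block signal.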
Technically, this means reviewers should apply a "will this work / is this consistent with the codebase" filter, not a personal-style filter, and bring system-level knowledge to catch missing or duplicated functionality that the diff alone won't reveal. The post also calls out incentive misalignment (e.g., SRE teams blocking feature teams) as a structural cause of excessive blocking. Practical implications for AI-generated PRs: treat them like human PRs, but feel free to gatekeep more, since LLMs often omit needed code and don't handle a flood of comments well. Humans still provide the broader contextual judgment that automated tools lack.