🤖 AI Summary
Engineering leaders tempted to “just rank” developers are chasing an illusion: there is no single metric that reliably identifies your best or worst engineers. The piece argues, drawing on the SPACE framework and long experience, that common signals (commit/PR counts, lines of code, cycle time) are misleading: they conflate style, scope, and complexity; hide selection bias (seniors take the harder work); and reflect systemic delays (reviews, CI, deploy windows). Lines of code reward verbosity over elegance, cycle time measures waiting rather than individual skill, and the normalized “productivity scores” sold by vendors often just repackage the same noisy proxies. Metrics are lagging indicators, and using them to stack-rank people invites gaming, knowledge hoarding, and morale problems.
Instead of ranking, the author recommends a people-and-context-first approach: use metrics as discussion starters, emphasize team-level delivery and flow (collaboration, dependencies, reactive vs. planned work), adopt competency frameworks for growth, and surface qualitative signals through manager reviews, peer input, developer-experience surveys, and skip-levels. Tools can make these conversations easier (e.g., collaborator breakdowns), but they won’t replace judgement. For AI/ML teams under pressure to “do more with less” amid the rise of AI coding tools, the takeaway is clear: measure predictability and outcomes at the team level and coach individuals on competencies, rather than chasing a single, deceptively simple score.
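To make the team-level framing concrete, here is a minimal sketch of aggregating flow signals (planned vs. reactive work, breadth of review collaboration) across a whole team instead of scoring individuals. The `WorkItem` shape and `team_flow_summary` helper are hypothetical illustrations under assumed data, not the article's tooling or any vendor's API.

```python
# Sketch: surface team-level flow signals as discussion starters, not rankings.
from collections import Counter
from dataclasses import dataclass

@dataclass
class WorkItem:
    kind: str        # "planned" or "reactive" (e.g., incident, urgent bugfix)
    reviewers: tuple # who collaborated on the review of this change

def team_flow_summary(items):
    """Aggregate at the team level; deliberately produces no per-person score."""
    kinds = Counter(item.kind for item in items)
    total = sum(kinds.values()) or 1  # avoid division by zero on an empty sprint
    reviewer_counts = Counter(r for item in items for r in item.reviewers)
    return {
        "reactive_share": kinds["reactive"] / total,    # how much work was unplanned?
        "planned_share": kinds["planned"] / total,
        "review_collaborators": dict(reviewer_counts),  # collaboration breadth, not a leaderboard
    }

if __name__ == "__main__":
    sprint = [
        WorkItem("planned", ("alice", "bob")),
        WorkItem("reactive", ("carol",)),
        WorkItem("planned", ("alice",)),
    ]
    print(team_flow_summary(sprint))
```

The output is meant to prompt questions in a retro or skip-level ("why was a third of our work reactive?"), which is the role the piece assigns to metrics: conversation starters, not scores.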