🤖 AI Summary
Cerbos’ engineers unpack the “productivity paradox” of AI coding assistants: they speed up boilerplate and rapid prototyping (MVPs, automations, junior dev onboarding) but often fail to shorten, and can even lengthen, the time to production-quality software. A July 2025 METR randomized trial with experienced open-source developers (using Cursor Pro with Claude 3.5/3.7 Sonnet) found AI users were on average 19% slower despite reporting that they felt faster, illustrating a dopamine-driven “productivity placebo.” Vendor and academic studies paint a mixed picture: GitHub/Microsoft (2023) saw ~56% faster completion on a contrived benchmark favoring scaffolded work; a 2024 field experiment across 4,867 professional developers found a 26.08% uplift in completed tasks (largest for juniors); and Faros (2025) telemetry covering 10,000 developers showed that high-AI teams handled 9% more tasks and 47% more PRs but incurred more context switching and review overhead.
The technical and security implications are significant. “Context rot” (model output quality degrading as conversation histories grow long) produces near-correct but brittle code, increasing debugging and review time; Apiiro (2024) reported that AI-generated code introduced 322% more privilege-escalation paths, 153% more design flaws, 40% more secrets exposure, and a 2.5x higher rate of critical vulnerabilities. Real incidents (a Gemini CLI RCE, a poisoned VS Code extension) underline the expanded attack surface created by new runtimes, plugins, and LLM backends. Bottom line: AI assistants are powerful for scaffolding and discovery, but the last 30% of production readiness (design, tests, security, reviews, and stable patterns) remains human work, and claims of 10x productivity are unsupported.
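To make the “secrets exposure” category concrete, here is a minimal, hypothetical sketch (not taken from the Cerbos post or the Apiiro report) of the pattern reviewers typically flag in AI-suggested code, next to the conventional fix of loading credentials from the environment; the key string and the `SERVICE_API_KEY` variable name are illustrative placeholders:

```python
import os

# Anti-pattern often flagged in AI-suggested snippets: a credential
# hardcoded into source, where it ends up committed to version control.
# (The value below is a made-up placeholder, not a real secret.)
API_KEY = "sk-live-EXAMPLE-DO-NOT-COMMIT"

def fetch_api_key() -> str:
    """Conventional fix: read the secret from the environment at runtime,
    failing loudly if it is missing rather than shipping a fallback value."""
    key = os.environ.get("SERVICE_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```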