🤖 AI Summary
At the Agentic Engineering Sessions, Google dev-tools lead Addy Osmani outlined what he calls the "70% problem": modern AI coding tools can reliably produce roughly 70% of an application’s scaffolding and obvious features, but the remaining 30%—edge cases, production integration, security, debugging and maintainability—still requires substantial human engineering. Osmani backed this with Google-internal adoption data (over 30% of code is now AI-generated) and sentiment trends showing declining trust: favorable views slid from ~70% to ~60% in two years, while about 30% of developers report little to no trust in AI-generated code.
The technical and organizational implications are concrete: AI accelerates greenfield development but struggles with legacy systems and technical debt, and repeated AI-driven fixes can cascade into hidden regressions across multiple files. Code review is emerging as a new bottleneck, as teams shift responsibility onto humans to validate generated code, ensure secure handling of credentials and APIs, and preserve architecture and observability. Osmani's prescription is pragmatic: keep humans in control. Treat AI as a productivity and learning aid, maintain rigorous reviews and tests, and prioritize understanding the code AI produces to avoid brittle, insecure, or unmaintainable systems.