Inside the making of Gemini 3 - how Google's slow and steady approach won the AI race (for now) (www.zdnet.com)

🤖 AI Summary
Google’s product team opened up about how Gemini 3 was built and why the release came later than the I/O-to-November cadence many expected. Rather than rushing experimental checkpoints into the wild, the team prioritized pre-training targets (stronger reasoning and multimodality) and an extended post-training iteration cycle focused on tool use, persona refinement and developer usability. That meant more closed-door testing, deeper feedback loops, and a big operational lift to coordinate simultaneous launches across the Gemini app, Search and AI Studio, all intended to reduce developer churn from constantly shifting model behavior and to ship a higher-quality experience at scale.

Technically notable: the team uses Gemini itself to cluster and analyze massive volumes of feedback, accelerate UI/product coding and triage issues, while deliberately keeping humans in the loop to preserve empathy for real user pain points. The Nano Banana Pro image model shows a major leap in text rendering for generated images (single-shot outputs are now often usable), but multi-turn generation still degrades: models can produce convincing-but-fake words or break down after several edits.

The piece highlights a broader industry implication: quality-driven, iterative development (and using models to help build models) can yield more stable, deployable AI, but complex serving, launch coordination and multi-turn robustness remain key engineering challenges.
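The article doesn't describe Google's internal tooling in any detail, but the feedback-clustering idea is easy to picture. A minimal sketch, assuming nothing about the real pipeline (the feedback strings, TF-IDF features and k-means below are stand-ins for whatever embedding model and clustering the Gemini team actually uses):

    # Minimal sketch: group free-text user feedback into themes for triage.
    # TF-IDF + k-means substitute for whatever embeddings and clustering
    # Google actually runs; the feedback strings are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    feedback = [
        "Image text looks garbled after the third edit",
        "Tool calls time out when chaining more than two tools",
        "Persona feels inconsistent between the app and AI Studio",
        "Generated signage contains made-up words",
        "Multi-turn edits slowly degrade image quality",
        "Function-calling schema rejected by the API",
    ]

    # Represent each feedback item as a sparse TF-IDF vector.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)

    # Group into a handful of themes; picking k is a judgment call in practice.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

    for label, text in sorted(zip(labels, feedback)):
        print(f"cluster {label}: {text}")

At frontier scale the embeddings would be learned and an LLM would name and summarize each cluster, but the shape of the problem (turn a mountain of free text into a short list of triageable themes) is the same.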