Google Earth's expanded AI features make it easier to ask it questions (www.theverge.com)

🤖 AI Summary
Google is expanding the AI capabilities tied to Google Earth by integrating its Gemini model into the Geospatial Reasoning framework, making it easier to chat with Earth AI systems and get answers that draw on multiple models. For trusted testers, Gemini now links different Earth models (weather forecasts, satellite imagery, population maps and environmental layers) so a single chat query can combine those signals, for example identifying infrastructure at risk from an oncoming storm or communities vulnerable to dust storms. Testers can also layer their own data on top of Google's imagery, population and environment models. An integrated chat pilot that helps users find objects and patterns in imagery (e.g., "find algae blooms") is being extended to US Google AI Pro and Ultra users with higher, unspecified limits, and will soon roll out to Google Earth Professional and Professional Advanced subscribers in the US.

This matters because it moves geospatial AI from single-model lookups to linked, context-aware reasoning across heterogeneous datasets, a capability that accelerates disaster response, environmental monitoring, infrastructure planning and custom analytics. Technically, the update emphasizes model orchestration (Gemini coordinating Earth AI specialist models), fusion of user data with pre-trained geospatial models, and conversational querying over spatial-temporal data. Note: the enhancement applies to Google's Geospatial Reasoning framework, not a separate "Gemini in Google Earth" product.
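
The orchestration pattern described above (a coordinator fanning one natural-language question out to specialist geospatial models and fusing their signals into a single answer) can be illustrated with a small sketch. This is a purely hypothetical toy, not Google's API: the Region class, the stubbed specialist models and the fusion rule are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Region:
    name: str
    lat: float
    lon: float

# Stub "specialist models": in a real system these would call weather,
# population and imagery models; here they return canned values.
def storm_risk(region: Region) -> float:
    return 0.8 if region.name == "coastal_district" else 0.1

def population_density(region: Region) -> int:
    return 12_000 if region.name == "coastal_district" else 300

def critical_infrastructure(region: Region) -> list[str]:
    return ["hospital", "power substation"] if region.name == "coastal_district" else []

SPECIALISTS: dict[str, Callable[[Region], object]] = {
    "weather": storm_risk,
    "population": population_density,
    "infrastructure": critical_infrastructure,
}

def answer_query(query: str, regions: list[Region]) -> list[dict]:
    """Toy coordinator: query every specialist for every region, then fuse
    the signals into one ranked answer."""
    results = []
    for region in regions:
        signals = {name: model(region) for name, model in SPECIALISTS.items()}
        # Toy fusion rule: flag regions with high storm risk, many residents
        # and at least one critical asset.
        at_risk = (
            signals["weather"] > 0.5
            and signals["population"] > 1_000
            and len(signals["infrastructure"]) > 0
        )
        results.append({"region": region.name, "at_risk": at_risk, **signals})
    return sorted(results, key=lambda r: r["weather"], reverse=True)

if __name__ == "__main__":
    regions = [Region("coastal_district", 29.7, -95.3), Region("inland_farms", 30.5, -96.1)]
    for row in answer_query("which infrastructure is at risk from the oncoming storm?", regions):
        print(row)
```

The design point the sketch tries to capture is that the value comes from the fusion step: any single model (weather, population or infrastructure) answers only part of the question, and it is the coordinator combining them that turns a chat query into a cross-dataset answer.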