Blinked in a photo? No problem – Google Photos can now fix that, and lots more, using AI (www.techradar.com)

🤖 AI Summary
Google Photos now uses Google’s Gemini “Nano Banana” image model to offer highly personalized face edits: you can ask the app to open closed eyes, remove sunglasses, or convert a forced grimace into a real smile, and it composes those fixes using other photos of the same people from your library. The update also adds Gemini-powered “Ask” functionality to interrogate and surface insights about your photos, plus AI-generated style templates that suggest edits tailored to your photo habits (e.g., “cartoon with my dog” or “doodle my name in the sand”), removing the need to craft prompts or learn manual retouching tools.

Technically, edits are produced by leveraging your Google Photos “face groups” — clusters you’ve already approved — so the model draws on internal examples of each person’s appearance to create realistic, context-aware composites rather than generic or hallucinated faces. That improves fidelity and UX for casual users, but raises privacy and governance questions: Google says the process stays within your account (and doesn’t train external models), yet using your photo history as a reference is a meaningful tradeoff.

For the AI/ML community this marks a notable shift toward personalized, privacy-scoped generative features in consumer apps, highlighting both new UX possibilities and the need for clear safeguards around identity, consent, and auditability.
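Google hasn’t published implementation details, but conceptually the personalization step amounts to selecting approved reference photos of the person being edited and conditioning the generative model on them. A minimal sketch of that selection logic (all names, `FaceGroup`, and `pick_references` are hypothetical, not Google Photos APIs):

```python
from dataclasses import dataclass

@dataclass
class FaceGroup:
    # A user-approved cluster of photos of one person
    # (stands in for Google Photos "face groups").
    person: str
    photo_ids: list

def pick_references(groups, person, k=3):
    """Pick up to k reference photos of `person` from approved clusters.

    A generative editor could condition on these references so a
    replacement face matches the real person rather than a generic one.
    """
    for g in groups:
        if g.person == person:
            return g.photo_ids[:k]
    return []  # no approved cluster: fall back to non-personalized editing

groups = [
    FaceGroup("alice", ["a1", "a2", "a3", "a4"]),
    FaceGroup("bob", ["b1"]),
]
print(pick_references(groups, "alice"))  # ['a1', 'a2', 'a3']
```

The privacy framing in the article maps onto this sketch: the references come only from clusters the user has already approved, and the empty-list fallback is where a non-personalized (generic) edit would happen instead.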