Google Has Your Data. Gemini Barely Uses It (www.shloked.com)

🤖 AI Summary
Google’s Gemini 3 (now powering a chatbot with roughly 650M monthly users) takes a deliberately restrained approach to memory: instead of always-on personalization, it stores a single structured “user_context” document plus a short rolling window of recent conversation turns. user_context is an LLM-generated, typed outline (demographics, interests, relationships, dated events) in which each bullet carries a timestamp and a rationale linking back to the source interaction. That makes memories inspectable, deletable (remove the source conversation and the derived memories go with it), and time-aware for conflict resolution.

Architecturally this is simpler than the distributed memory modules in ChatGPT and Claude: one summarization pipeline produces one canonical artifact, which is easier to govern and extend (and could plausibly be refreshed from other Google surfaces such as Calendar or Docs, if you opt in).

Crucially, Gemini defaults to not using user_context: the model must ignore stored data unless the user explicitly triggers personalization with a phrase like “based on what you know about me.” Even then, it may draw only on the minimal facts required and is forbidden from making sensitive inferences.

The design prioritizes safety, traceability, and user control at the cost of serendipitous personalization, higher token/context overhead (rationales roughly double memory size), and availability limited to the slower Pro models. For privacy-conscious users and enterprise governance, Gemini’s explicit, timestamped memory is a meaningful advance; for anyone chasing seamless, proactive personalization, it’s a conscious trade-off that leaves most of Google’s data siloed until users opt in.
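
To make the described design concrete, here is a minimal sketch of what such a memory store could look like. All names (MemoryEntry, UserContext, the trigger phrases) are illustrative assumptions based on the summary above, not Gemini’s actual implementation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MemoryEntry:
    """One bullet in the user_context outline (hypothetical structure)."""
    category: str                 # e.g. "interests", "relationships", "dated_events"
    text: str                     # the memory itself, e.g. "training for a marathon"
    recorded_on: date             # timestamp enables time-aware conflict resolution
    rationale: str                # why this was stored; roughly doubles memory size
    source_conversation_id: str   # link back to the originating interaction

@dataclass
class UserContext:
    entries: list[MemoryEntry] = field(default_factory=list)

    def delete_conversation(self, conversation_id: str) -> None:
        # Deleting a source conversation removes every memory derived from it.
        self.entries = [
            e for e in self.entries
            if e.source_conversation_id != conversation_id
        ]

    def retrieve(self, prompt: str) -> list[MemoryEntry]:
        # Default is "do not use": memories surface only when the user
        # explicitly invokes personalization. Trigger phrases are assumed.
        triggers = ("based on what you know about me", "personalize this")
        if not any(t in prompt.lower() for t in triggers):
            return []
        # A real system would return only the minimal required facts and
        # exclude anything enabling sensitive inferences.
        return self.entries
```

Under these assumptions, the two governance properties fall out directly: retrieval is gated on an explicit user request, and deletion propagates from source to derived memories.

```python
ctx = UserContext()
ctx.entries.append(MemoryEntry(
    category="dated_events",
    text="Moved to Berlin",
    recorded_on=date(2024, 9, 1),
    rationale="User mentioned relocating for a new job",
    source_conversation_id="conv-42",
))

assert ctx.retrieve("What should I cook tonight?") == []                # gated off by default
assert len(ctx.retrieve("Based on what you know about me, pick a gym")) == 1
ctx.delete_conversation("conv-42")                                      # derived memory goes too
assert ctx.entries == []
```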