🤖 AI Summary
A technology critic tested Cora, a web app that connects to Gmail, reads recent messages, archives the bulk, and produces twice-daily briefings and draft replies. After onboarding (Cora scanned 200 emails and inferred basic traits), the author cleared an 829-message backlog: Cora accurately archived most low-value mail and left a handful of items needing attention. It suggested quick replies for “layer-two” messages but deliberately avoided the deeper, context-heavy “layer-three” emails that require nuanced judgment. The experiment highlighted both immediate productivity gains and Cora’s conservative behavior: useful triage and drafting, but no reliable autonomy on high-stakes, relationship-sensitive requests.
For AI/ML practitioners, the story crystallizes why inbox automation remains unsolved: these tools split into a control program that manipulates mail and commercial LLMs (e.g., Google’s Gemini Flash) that provide the “intelligence” via prompts. That architecture is flexible and cost-effective, letting users tweak prompts to change filtering behavior, but it depends on models that lack a user’s tacit knowledge—the unstated preferences, relationships, and context Michael Polanyi described—making robust, trustworthy auto-replies infeasible today. Emerging designs (e.g., OrchestrateInbox) move beyond message-level operations toward chat-driven briefings and decision orchestration, suggesting future directions: richer personal memory, safer personalization, and interfaces that augment rather than replace human judgment.
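The split architecture described above can be sketched in a few lines: a control program routes each message based on a label the LLM returns for a triage prompt. This is a minimal illustration, not Cora’s actual code; every name here is hypothetical, and the LLM call is stubbed with a keyword heuristic so the sketch runs standalone (in practice it would call a hosted model such as Gemini Flash).

```python
# Hypothetical sketch of the control-program / LLM split for inbox triage.
# The prompt is the user-tweakable part; the routing logic is fixed code.

TRIAGE_PROMPT = """Classify this email as exactly one of:
layer1 (low-value, archive), layer2 (quick reply), layer3 (needs human judgment).
Email: {email}"""

def call_llm(prompt: str) -> str:
    """Placeholder for a commercial LLM call. Stubbed with a crude
    keyword heuristic so this example is self-contained and runnable."""
    text = prompt.lower()
    if "unsubscribe" in text or "newsletter" in text:
        return "layer1"
    if "quick question" in text or "can you confirm" in text:
        return "layer2"
    return "layer3"

def triage(email: str) -> str:
    """Control-program side: act on the model's label. Note the
    conservative default—layer-three mail is left for the human,
    mirroring the cautious behavior described in the article."""
    label = call_llm(TRIAGE_PROMPT.format(email=email))
    if label == "layer1":
        return "archived"
    if label == "layer2":
        return "draft_reply"
    return "left_for_human"
```

Because the prompt, not the code, defines the classification policy, a user can change filtering behavior by editing `TRIAGE_PROMPT` without touching the routing logic.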