Talking to Windows' Copilot AI makes a computer feel incompetent (www.theverge.com)

🤖 AI Summary
Microsoft’s much‑promoted Windows Copilot, especially the Copilot Vision feature on new Copilot PCs, promises conversational control of your PC, but real‑world testing shows it is frequently slow, flaky, and prone to hallucination. The assistant requires repeated permission to “share your screen,” responds with long, canned audio, and misidentifies images (e.g., alternating between HyperX and Shure mics). It fails to recognize a Saturn V rocket or run the simulations shown in ads, and gives inconsistent directions for a cave photo, often basing its answers on filenames. It also produced embarrassing, generic bios from an Instagram portfolio, returned dead or wrong links, misread spreadsheet values, and couldn’t perform simple OS actions like toggling dark mode, because Copilot Actions that act on local files are still experimental and opt‑in via Copilot Labs.

For the AI/ML community, this is a cautionary example of the gap between demo narratives and deployed systems: vision+LLM combos still suffer from poor grounding, brittle OCR/context fusion, hallucination, and weak tool integration, all of which undermine user trust and safety. The story highlights practical friction (privacy/UI prompts, latency), the limits of multimodal reasoning, and the need for robust action APIs and verification layers before agentic interfaces can fulfill Microsoft’s vision.

There is potential here, especially for accessibility, but widespread adoption requires stronger grounding, deterministic tool use, and tighter OS‑level controls.