What we lose when we surrender care to algorithms (www.theguardian.com)

🤖 AI Summary
Clinics are rapidly adopting AI tools: real‑time scribes that transcribe visits, highlight keywords, and suggest diagnoses and billing codes, along with consumer chatbots patients use to prepare for appointments or self‑triage. The article opens with a vignette of an AI scribe that accurately summarized a 70‑year‑old patient's symptoms but missed her vocal catch, her fear of stairs, and her traumatic history — nonverbal and contextual cues that shaped her true clinical need.

Adoption is widespread: two‑thirds of US physicians (a 78% year‑over‑year jump) and 86% of health systems reported using AI in 2024. Models are also demonstrating striking technical performance — AI reading radiology scans, flagging sepsis, detecting skin cancer from phone photos, and one system (OpenEvidence) scoring 100% on the USMLE — fueling arguments that AI can rival or augment clinical reasoning.

The piece warns that this technical power risks hollowing out care when deployed in profit‑driven, efficiency‑obsessed systems. AI lowers documentation burdens but amplifies automation bias, flattens patient narratives (patients learn phrasing from chatbots), and channels encounters toward measurable, billable actions rather than social determinants or relational understanding. Framed as the next stage of the "medical gaze," algorithmic tools can commodify suffering — optimizing diagnoses and billing while eroding listening, critical inquiry, and trust. The result: models that perform better on narrow tasks, but a potential loss of the tacit, affective, and contextual information essential to safe, just, person‑centered medicine.