Demand for human radiologists is at an all-time high (www.worksinprogress.news)

🤖 AI Summary
AI has made striking benchmark gains in radiology. CheXNet (2017) and later systems from companies like Annalise.ai, Lunit, Aidoc, and Qure.ai can flag hundreds of findings quickly (CheXNet was trained on more than 100,000 chest X-rays and can classify a scan in under a second), and there are now more than 700 FDA-cleared imaging models, roughly three-quarters of all approved medical AI devices. A few, such as IDx-DR, are even cleared to operate autonomously.

Yet instead of displacing radiologists, demand for human specialists is at an all-time high: U.S. diagnostic radiology residencies offered a record 1,208 positions in 2025, vacancy rates are up, and average radiologist pay reached roughly $520,000, about 48% higher than in 2015.

The disconnect comes down to generalization, scope, and systems issues. Most models are narrow, each trained to detect one finding on one image type, and many are validated on limited, single-site datasets, with out-of-sample performance drops of up to ~20 percentage points. Training sets skew toward clear, common cases and underrepresent children, women, and minorities, and some modalities (e.g., ultrasound) are harder to model. Regulators and payers remain cautious about full automation, and clinical trials have historically shown that assistive AI (e.g., mammography CAD) can raise false-positive rates and provoke clinician overreliance.

For the AI/ML community, this points to clear priorities: robust multi-site validation, broader and more representative datasets, integrated multi-model orchestration, and careful workflow design that augments rather than replaces radiologists.