🤖 AI Summary
This piece argues that AI alone won't rescue drug discovery unless it is paired with better data and regulatory change. The author revisits Eroom's Law (drug approvals per billion dollars of R&D have roughly halved every nine years) and warns against two self-delusions: equating more hypotheses with progress, and over-relying on model-system data that fail to capture human physiology. Current AI wins in biology (e.g., AlphaFold, antibody design) succeed where inputs, outputs, and datasets are dense and well defined, but they accelerate workflows we could already perform. Most programs still fail in humans (roughly 10% succeed) because preclinical models lack predictive validity; with clinical development costs exceeding $1B, the quality of hypotheses, not their quantity, is the bottleneck.
Technically, the article stresses the need for causal, dynamic, multi-scale in-human data (not just static multi-omics) to train models that truly forecast clinical outcomes. Human genetics is a concrete example: target mechanisms supported by human genetic variants roughly double to triple clinical success rates. The practical implications: pursue regulatory and trial reform (the "Clinical Trial Abundance" idea) to make interventional, in-human data cheaper and faster, and treat AI as a complement to, not a substitute for, better clinical data. Otherwise AI risks "clogging" pipelines with mid-quality leads and amplifying the past mistakes of reductionist approaches.