An Age of AI Enlightenment (xiangfu.co)

🤖 AI Summary
The piece argues that current AI training—pretraining, supervised fine-tuning, and reward-driven RL—optimizes models to fit expectations and suppress anomalies, which makes them excellent at well-defined tasks but poor at discovery. Discovery instead requires creating and detecting deviations from a precise framework of expectation, and that demands RL objectives that explicitly reward verifiable anomalies.

The essay points out that defining a “meaningful anomaly” is hard in domains like math or code, but much easier in physical science, where extraordinary properties (e.g., superconductivity above liquid-nitrogen temperature, strong magnets without rare earths, ultra-light strong alloys) are directly testable in the lab. Technically, the author proposes building multi-level frameworks of expectation across scales—electronic, atomic, continuum, device—so weak signals can be amplified by cross-scale consistency. AI systems could maintain parallel models (quantum simulations, continuum models, experimental plans) and evaluate hypotheses against experiments, enabling verifiable anomaly-driven RL.

For the AI/ML community this implies a shift from mode-seeking objectives toward exploration-rewarded training, tighter integration with experimental pipelines, and new evaluation metrics for “discoveries.” The payoff could be accelerated material and scientific breakthroughs, and potentially paradigm-shifting insights that reshape both science and how we train AI.
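The core mechanism—rewarding deviations from an expectation model only when an independent model confirms them—can be illustrated with a toy sketch. Everything here (the models, the `measure` stand-in for an experiment, the `anomaly_reward` function and its `beta`/`tol` parameters) is hypothetical, not from the essay; it just shows the shape of a "verifiable anomaly" reward with a cross-model consistency check:

```python
import math
import random

random.seed(0)

def expectation_model(x):
    # Baseline "framework of expectation": the predicted property value.
    return 2.0 * x

def independent_model(x):
    # A second model at a different "scale"; agreement with the baseline
    # on an observed deviation stands in for cross-scale verification.
    return 2.0 * x + 0.05 * math.sin(x)

def measure(x):
    # Stand-in for a real experiment: matches expectation except for a
    # genuine anomaly planted near x = 3.
    base = 2.0 * x
    anomaly = 1.5 if abs(x - 3.0) < 0.2 else 0.0
    return base + anomaly + random.gauss(0, 0.01)

def anomaly_reward(x, beta=1.0, tol=0.1):
    """Reward deviations that BOTH models agree exceed the noise tolerance."""
    obs = measure(x)
    dev1 = abs(obs - expectation_model(x))
    dev2 = abs(obs - independent_model(x))
    verified = dev1 > tol and dev2 > tol  # cross-model consistency check
    return beta * dev1 if verified else 0.0

# A greedy search over candidate "experiments" is steered toward the
# anomalous region, unlike a mode-seeking objective that would ignore it.
best = max((x / 10 for x in range(0, 50)), key=anomaly_reward)
```

In this sketch, `best` lands near the planted anomaly at x ≈ 3, while measurement noise elsewhere yields zero reward—the consistency check is what keeps the objective from chasing noise as if it were discovery.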