The Axiom of Predictive Coherence (isaacbound.substack.com)

🤖 AI Summary
The author proposes the "Axiom of Predictive Coherence" as a universal moral principle: systems that learn should act to maximize their ability to predict their own future across nested scales (cells, organizations, civilizations). Morality, on this view, is the deliberate conservation of a system's predictive power. Sentience is reframed as an introspective modeling layer that monitors prediction error, and the key quantitative claim is that moral value scales with predictive coherence multiplied by temporal depth: short-term gains that undermine long-term predictability are immoral, while actions that expand reliable forecasting (even via risky innovation) are virtuous. Stability without renewal leads to decay, so creativity and error-tolerant feedback are essential moral technologies.

For AI/ML, this recasts ethics and alignment as measurable engineering goals: build models and systems that increase the world's capacity to model itself, extend the temporal horizon of reliable prediction, and design for robustness and calibrated uncertainty rather than brittle short-term optimization.

Practical implications touch governance (legitimacy = sustaining collective prediction), economics (unsustainable extraction as forecasting collapse), and tech policy (surveillance may raise prediction costs faster than it improves accuracy). Operationalizing the axiom invites metrics for predictive coherence, long-horizon evaluation, error-tolerant training regimes, and systems that trade immediate performance for sustainable, explainable forecasting, as sketched below.
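The essay's one quantitative claim, that moral value scales with predictive coherence times temporal depth, is easy to make concrete as a toy metric. Below is a minimal, purely illustrative Python sketch: `coherence` maps the log-loss of binary probabilistic forecasts into a (0, 1] score, and `moral_value` weights each horizon's coherence by the horizon length. The function names, the exp(-loss) mapping, and the horizon-weighted sum are all assumptions for illustration, not definitions from the essay.

```python
import numpy as np

def coherence(y_true, y_prob, eps=1e-12):
    """Toy coherence score in (0, 1]: exp(-mean log-loss) of binary
    probabilistic forecasts; 1.0 would mean perfect prediction.
    (Illustrative stand-in, not the essay's definition.)"""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    nll = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float(np.exp(-nll))

def moral_value(coherence_by_horizon):
    """One hypothetical reading of 'value = coherence x temporal depth':
    weight each horizon's coherence score by the horizon length itself,
    so durable predictability outweighs short-lived accuracy."""
    return sum(h * c for h, c in coherence_by_horizon.items())

# Scoring a single forecaster on observed binary outcomes:
outcomes = [1, 0, 1, 1, 0]
probs = [0.9, 0.2, 0.8, 0.7, 0.3]
print(round(coherence(outcomes, probs), 3))  # ~0.776

# Illustrative per-horizon coherence scores (invented numbers):
brittle = {1: 0.95, 10: 0.20, 100: 0.01}  # sharp now, collapses later
durable = {1: 0.80, 10: 0.60, 100: 0.40}  # modest but sustained

print(moral_value(brittle))  # 3.95
print(moral_value(durable))  # 46.8
```

Under this toy weighting, the "brittle" forecaster that dominates at horizon 1 scores far below the "durable" one, matching the summary's point that short-term gains which undermine long-term predictability count against a system.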