Predicting the Future: The Supergroup of AI, Humans, Hedgehogs and Foxes (www.newsweek.com)

🤖 AI Summary
Philip Tetlock’s decades-long forecasting research, begun with large prediction tournaments and continued through the Good Judgment Project and IARPA challenges, distills why some forecasters (“foxes”) consistently outperform others (“hedgehogs”). Foxes stitch together diverse evidence, update their beliefs symmetrically in Bayesian fashion, resist System 1 intuitions (availability, affect, substitution, and narrative heuristics), use counterfactuals and rapid feedback where possible, and internally aggregate competing viewpoints. Aggregating many independent forecasts (the “wisdom of crowds”) boosts accuracy further; hedgehogs, wedded to a single grand theory, fare much worse and over long horizons can even be outperformed by chance.

Tetlock’s recent conclusion that “it is absolutely crucial to integrate LLMs into almost all lines of inquiry” has clear technical implications for AI/ML. LLMs can operationalize fox-like behaviors: generating alternative hypotheses and counterfactuals, synthesizing heterogeneous data, performing probabilistic (Bayesian) updates, and producing calibrated probability estimates that humans can then vet. For practitioners this implies building human+LLM forecasting pipelines, ensemble- and market-based aggregation, continuous-feedback training for calibration in “learning-friendly” domains, and careful design for learning-unfriendly areas (e.g., geopolitics). The result: decision-support systems that combine human judgment, crowd aggregation, and LLM-driven reasoning to improve real-world forecasts.
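The crowd-aggregation and calibration ideas above can be sketched concretely. A minimal, illustrative example (not from the article): one common approach from the forecasting-tournament literature averages independent probability forecasts in log-odds space and optionally “extremizes” the result, since pooled forecasts tend to be diluted toward 0.5; calibration can then be checked with a Brier score. The `extremize` parameter and helper names here are assumptions for illustration.

```python
import math

def prob_to_logodds(p):
    """Convert a probability in (0, 1) to log-odds."""
    return math.log(p / (1 - p))

def logodds_to_prob(l):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-l))

def aggregate_forecasts(probs, extremize=1.0):
    """Combine independent probability forecasts by averaging in
    log-odds space. extremize > 1 pushes the pooled forecast away
    from 0.5, a correction studied in forecasting-tournament research."""
    mean_lo = sum(prob_to_logodds(p) for p in probs) / len(probs)
    return logodds_to_prob(extremize * mean_lo)

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes;
    lower means better calibration."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasters independently lean "yes": the pooled estimate
# lands between them, and extremizing sharpens it further.
pooled = aggregate_forecasts([0.6, 0.7, 0.8])
sharpened = aggregate_forecasts([0.6, 0.7, 0.8], extremize=2.5)
```

The same pooling step would apply whether the individual forecasts come from human analysts, an LLM queried with varied prompts, or a mix of both.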