Learning is optimized when we fail 15% of the time (neurosciencenews.com)

🤖 AI Summary
Researchers led by Robert Wilson at the University of Arizona mathematically derived, and then validated with machine-learning simulations, an "85% Rule" for training: for a broad class of stochastic gradient descent (SGD) based learners on binary classification tasks, learning is fastest when the trainee errs about 15% of the time (≈15.87% by theory), i.e., maintains roughly 85% accuracy.

The team tested this with simple two-choice tasks (pattern classification, odd/even or low/high digit labels) and found that both artificial networks and biologically plausible models improved most quickly when example difficulty was tuned to produce that target error rate; they also found supporting patterns in prior animal-learning studies.

The finding matters because it puts the pedagogical intuition of a "sweet spot" for challenge on a quantitative footing and suggests practical controls for curriculum learning: adaptive difficulty schedulers or sampling strategies that hold accuracy near 85% should maximize learning speed and sample efficiency for SGD-like learners (a minimal sketch of such a scheduler follows below).

The authors caution that the result is derived for binary decisions and SGD-style updates, so it is not a blanket prescription for all cognitive or classroom settings (e.g., complex multi-class tasks or higher-level conceptual learning). Still, it offers a compact, testable guideline for designing training regimes in AI, neuroscience experiments, and perceptual-skill education (e.g., radiology), and invites extensions to more complex learning scenarios.
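To make the curriculum-learning suggestion concrete, here is a minimal sketch of such a scheduler in Python (NumPy only). This is not the authors' procedure, just an illustration under simple assumptions: the learner is a one-dimensional logistic unit trained by SGD, "difficulty" is the separation between two Gaussian class means, and a proportional controller nudges that separation so a running accuracy estimate hovers near 85%. All names and constants (TARGET_ACC, LR, EMA, GAIN, separation) are illustrative choices, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    TARGET_ACC = 0.85  # the rule's sweet spot (hypothetical controller target)
    LR = 0.05          # SGD learning rate (illustrative)
    EMA = 0.99         # smoothing factor for the running-accuracy estimate
    GAIN = 0.02        # controller gain for difficulty adjustments (illustrative)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w, b = 0.0, 0.0    # 1-D logistic learner trained by SGD
    separation = 3.0   # distance between class means; larger => easier examples
    acc_est = 0.5      # running accuracy estimate

    for step in range(1, 20001):
        # Draw one labeled example: x ~ N(+sep/2, 1) for y=1, N(-sep/2, 1) for y=0.
        y = int(rng.integers(0, 2))
        x = rng.normal((y - 0.5) * separation, 1.0)

        # Standard SGD step on the logistic (cross-entropy) loss.
        p = sigmoid(w * x + b)
        w -= LR * (p - y) * x
        b -= LR * (p - y)

        # Track accuracy, then nudge difficulty so accuracy hovers near the target:
        # too accurate -> shrink separation (harder); too error-prone -> grow it (easier).
        correct = float((p > 0.5) == (y == 1))
        acc_est = EMA * acc_est + (1.0 - EMA) * correct
        separation = max(0.05, separation + GAIN * (TARGET_ACC - acc_est))

        if step % 5000 == 0:
            print(f"step {step}: accuracy~{acc_est:.3f}, separation={separation:.3f}")

In a real training pipeline the same feedback loop could instead drive example sampling, noise level, or augmentation strength; the design point is only that accuracy, not raw difficulty, is the controlled variable.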