🤖 AI Summary
Andrej Karpathy’s throwaway line on the Dwarkesh podcast, that humans “collapse” over the course of life the way models overfit, crystallizes a worrying parallel: children are high-entropy systems (novel, exploratory, unpredictable), while adults regress into repeated patterns, lower learning rates, and stale behaviors. Karpathy frames entropy as the antidote to that collapse: when novelty drops, both people and models recycle the same thoughts or outputs, which feels like a loss of creativity and adaptability. The author connects this to Shannon’s information theory (information is the non-redundant part of a signal) and worries that outsourcing cognition to AI trained on average internet content risks accelerating the collapse into mediocrity.
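To make the Shannon framing concrete, here is a minimal sketch (not from the article; the `shannon_entropy` helper and the two toy distributions are illustrative assumptions) of entropy as a measure of how spread out a behavior distribution is: a uniform, exploratory distribution carries more bits than a collapsed one concentrated on a few habits.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution; zero entries are ignored."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# A "child-like" behavior distribution spreads probability over many options,
# while a "collapsed" one concentrates on a few repeated patterns.
exploratory = np.full(8, 1 / 8)                                  # uniform over 8 options
collapsed = np.array([0.9, 0.05, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0])

print(shannon_entropy(exploratory))  # 3.0 bits
print(shannon_entropy(collapsed))    # ~0.57 bits
```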
For AI/ML practitioners and curious readers, the takeaway is twofold and concrete. On the model side, “entropy” maps to diversity and unpredictability in inputs and objectives, the things that combat overfitting, keep learning rates effective, and preserve expressiveness; in practice this suggests tooling and curricula such as high-entropy data augmentation, continual learning on novel tasks, exploration-driven objectives, and regularization that favors novelty. On the human side, deliberate sources of novelty (reading widely, stand-up comedy, cross-domain experiences) keep personal models from collapsing. The implication is an actionable symmetry: preserving entropy, through dataset diversity, training regimes, and life habits, is essential to sustain both machine intelligence and human creativity.
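As one hedged illustration of what “regularization that favors novelty” can look like, the sketch below adds an entropy bonus to a standard cross-entropy loss in PyTorch. The `loss_with_entropy_bonus` function and the `beta` weight are hypothetical, not anything described in the post; this is just one common way to penalize overconfident, low-entropy predictions.

```python
import torch
import torch.nn.functional as F

def loss_with_entropy_bonus(logits, targets, beta=0.01):
    """Cross-entropy loss minus a small bonus for the entropy of the model's
    output distribution; subtracting the bonus nudges the optimizer away from
    overconfident, repetitive predictions (a simple entropy regularizer)."""
    ce = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return ce - beta * entropy

# Toy usage: a batch of 4 examples over 10 classes.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = loss_with_entropy_bonus(logits, targets)
loss.backward()
```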