🤖 AI Summary
A new paper by David H. Cropley argues, and mathematically derives, that large language models (LLMs) are structurally capped at a creativity score of 0.25 on a 0–1 scale, roughly the border between amateur (“little-c”) and professional (“Pro-c”) creativity. Cropley models creativity as the product of effectiveness (usefulness/coherence) and novelty (surprise/uniqueness). Because next-token prediction ties token choice to statistical likelihood, effectiveness and novelty trade off in a closed probabilistic system; the product of two inversely related variables on a 0–1 scale peaks when both sit at moderate levels, at 0.5 × 0.5 = 0.25, yielding a hard ceiling. This formal result matches empirical observations that AI outputs typically land around the 40th–50th percentile compared to humans.
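The ceiling can be checked numerically. This is a minimal sketch assuming the simplest form of the inverse relationship, effectiveness = 1 − novelty on a 0–1 scale; the paper's exact tradeoff function is not reproduced here, only the product model described above.

```python
import numpy as np

# Assumption (for illustration): effectiveness e and novelty n are
# perfectly inversely related on [0, 1], i.e. e = 1 - n.
n = np.linspace(0.0, 1.0, 1001)   # novelty
e = 1.0 - n                        # effectiveness under the inverse tradeoff
creativity = e * n                 # the product model of creativity

i = int(np.argmax(creativity))
print(f"peak creativity = {creativity[i]:.4f} at novelty = {n[i]:.2f}")
# → peak creativity = 0.2500 at novelty = 0.50
```

Any strictly inverse tradeoff of this kind yields a maximum product well below 1, which is the structural point of the argument; the linear case gives exactly 0.25.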
The paper also explains why decoding tweaks (temperature, nucleus sampling) only shift that balance within the same constrained space and cannot break the limit: added randomness increases novelty but hurts usefulness, and vice versa. Cropley suggests that escaping the ceiling requires architectural departures from probability-tethered token generation, e.g., non-token-probability generative processes or hybrid symbolic–neural systems that inject structured novelty. Practically, the finding warns industries that rely on transformative originality (advertising, design, entertainment) that over-reliance on current LLMs risks homogenized, formulaic output unless fundamentally different AI architectures are developed.
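The temperature tradeoff can be illustrated with a toy next-token distribution. This sketch uses hypothetical logits and two illustrative proxies not taken from the paper: normalized entropy as a stand-in for novelty and top-token probability mass as a stand-in for effectiveness. It shows only that raising temperature moves mass from one to the other within the same distribution.

```python
import numpy as np

def softmax_T(logits, T):
    # Temperature-scaled softmax: T < 1 sharpens the distribution,
    # T > 1 flattens it.
    z = np.array(logits, dtype=float) / T
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical next-token logits (illustrative values only).
logits = [3.0, 1.5, 1.0, 0.2]

results = {}
for T in (0.5, 1.0, 2.0):
    p = softmax_T(logits, T)
    # Normalized entropy in [0, 1] as a crude novelty proxy.
    novelty = float(-(p * np.log(p)).sum() / np.log(len(p)))
    # Probability of the most likely token as a crude effectiveness proxy.
    effectiveness = float(p.max())
    results[T] = (novelty, effectiveness)
    print(f"T={T}: novelty~{novelty:.2f}, effectiveness~{effectiveness:.2f}")
```

Sweeping T only slides along the tradeoff curve: the entropy proxy rises as the top-token mass falls, which is the paper's point that sampling parameters rebalance novelty against usefulness rather than escaping the constrained space.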