🤖 AI Summary
A new theoretical analysis in the Journal of Creative Behavior by David H. Cropley argues that current large language models (LLMs) are mathematically limited to “amateur”‑level creativity. Using the standard product definition of creativity (effectiveness × novelty) and modeling LLMs as probabilistic next‑token predictors, Cropley shows an inherent trade‑off: high‑probability tokens maximize effectiveness but minimize novelty, while low‑probability tokens increase novelty at the cost of coherence. When that inverse relationship is expressed formally, it yields a maximum creativity score of 0.25 (on a 0–1 scale), achieved only when effectiveness and novelty are balanced at moderate levels. Cropley links that ceiling to the “Four C” creativity model and to empirical tests that place AI outputs around the 40th–50th percentile of human work—roughly the boundary between everyday “little‑c” and “Pro‑c” creativity.
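The arithmetic behind the 0.25 ceiling follows directly from the product definition once novelty is tied to effectiveness by the paper’s linear approximation. A minimal sketch, using our own symbols (e for effectiveness, n for novelty, C for creativity, all on a 0–1 scale) rather than Cropley’s exact notation:

```latex
% Sketch of the ceiling argument, assuming the linear trade-off n = 1 - e.
\[
C(e) = e \cdot n = e(1 - e), \qquad
\frac{dC}{de} = 1 - 2e = 0 \;\Rightarrow\; e = \tfrac{1}{2}, \qquad
C_{\max} = \tfrac{1}{2} \cdot \tfrac{1}{2} = 0.25.
\]
```

The maximum sits exactly where effectiveness and novelty are balanced at 0.5 each, which is why the paper describes the ceiling as reachable only at moderate levels of both.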
The paper’s significance lies in reframing generative AI not as a budding Big‑C creator but as a powerful mimic that will struggle to autonomously generate transformative, expert‑level ideas under current architectures. Key technical caveats include the use of a linear approximation for novelty, the assumption of standard decoding modes (greedy or simple sampling), and the exclusion of human‑in‑the‑loop editing. Cropley notes possible avenues to nudge the ceiling—temperature tuning, reinforcement‑learning adjustments, or new architectures that break strict reliance on past statistical patterns—but argues a paradigm shift would be needed for LLMs to rival top human creativity.
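To make the temperature‑tuning caveat concrete, here is a minimal sketch of temperature‑scaled softmax sampling, the standard decoding knob the paper alludes to. The logit values and function name are invented for illustration and are not from Cropley’s analysis:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Temperature-scaled softmax sampling over next-token logits."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

rng = np.random.default_rng(0)
logits = [4.0, 2.0, 1.0, 0.5]  # toy next-token scores, purely illustrative

for t in (0.2, 1.0, 2.0):
    _, probs = sample_with_temperature(logits, t, rng)
    print(f"T={t}: {np.round(probs, 3)}")
# Low T concentrates mass on the likeliest token (effective but predictable);
# high T flattens the distribution (more novel picks, less coherence).
```

Because temperature only reshapes the same learned distribution rather than changing what the model knows, this illustrates why the paper treats it as a way to nudge, not break, the ceiling.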