LLMs and Creation Outside of Time (balajmarius.com)

🤖 AI Summary
The essay argues that large language models don't "create" the way humans do, because human creativity is inseparable from temporal experience: memory, history, and a projected future. LLMs, by contrast, sit outside that timeline. They hold a vast, unprocessed archive and generate by sampling and recombining it, yielding outputs that feel uncanny and hauntological: style without authorship, echoes without an originating act. The piece uses vaporwave and the "abandoned mall" metaphor to show how model outputs can sound like polished residues of culture rather than transformed, forward-facing works; recombination without intention or learning is aesthetically suggestive but not genuinely novel.

For the AI/ML community this reframing matters both technically and culturally. It casts current architectures as "suspended past" engines: sampling-based generation lacks lived memory, desire, or projective goals, so its novelty tends to be derivative. The implications include rethinking how generativity is evaluated and prioritizing directions that restore temporality and agency, such as persistent memory, continual learning, goal-directed agents, causal or world models, and human-in-the-loop practices that embed interpretation and transformation.

The warning is broader than replacement fears: if society accepts spectral, archival creativity as sufficient, we risk cultural stagnation. Engineers and researchers should therefore treat creative AI not merely as a generator of plausible pastiche, but as a component to be integrated into practices that build real historical depth and forward movement.