I wrote 40 papers about AI generating synthetic truth. I used AI to write them [pdf] (philpapers.org)

🤖 AI Summary
Independent researcher Faruk Alpay released a philosophical-technical paper, "The Emperor’s New Algorithms" (part of a wider project in which he says he used AI to generate dozens of papers), arguing that large language models (LLMs) produce a new kind of "synthetic truth." Framing LLMs as intentionless "tailors," Alpay recasts Andersen’s fable: modern models optimize for statistical plausibility rather than correspondence to external facts, so confident outputs can masquerade as reality. The piece is significant because it bridges concrete model mechanics with epistemology and social risk, showing how architecture, incentives, and human cognitive biases together create fertile ground for convincing but ungrounded content.

Technically, Alpay centers the Transformer self-attention operation Attention(Q, K, V) = softmax(QKᵀ/√d_k)V as the "loom" that weaves personalized outputs: Q as user desire, K as historical/training data, softmax as a reality filter that suppresses low-probability (potentially true) complexity and amplifies high-probability coherence, and V as content. He links hallucinations and the "stochastic parrot" critique to Baudrillard’s hyperreality, arguing that outputs become fourth-order simulacra: signs referring to signs without external verification.

The paper concludes with practical prescriptions: explainable AI, verification systems, and cultivating "epistemic hygiene" and critical digital literacy to restore the role of the honest (verifying) child in human–AI interaction.
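For readers unfamiliar with the formula the summary quotes, scaled dot-product attention can be sketched in a few lines of NumPy. This is a generic illustration of the standard operation, not code from Alpay's paper; the toy Q, K, V matrices are made up for demonstration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1: a probability "filter"
    return weights @ V                  # weighted blend of the value vectors

# Toy example: 2 queries attending over 3 key/value pairs, d_k = 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # prints (2, 4)
```

The softmax step is what the paper frames as the "reality filter": it converts raw similarity scores into a probability distribution, so each output row is a plausibility-weighted mixture of the values rather than a lookup of any single fact.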