The Little Theorems (blog.computationalcomplexity.org)

🤖 AI Summary
Last week, Purdue philosopher Eamon Duede argued that easier writing via AI will let researchers publish the many "little" results they would previously have left unwritten. The author pushes back: that swarm might be useful. Small theorems in complexity theory often never make it into conferences or journals; the examples cited include the equivalence of finding an S_2^p witness with TFNP^NP, and the folklore identity P^PP = P^#P, proved by a simple binary-search trick (sketched below) and historically unattributed until Toda mentioned it.

Minor results have long circulated privately (Fermat's unpublished notes, Euler's later publication), and traditional short-paper venues like Information Processing Letters have declined, leaving arXiv as the main outlet, though AI-assisted writeups remain culturally frowned upon.

Significance for AI/ML and the research ecosystem: generative tools could lower the friction of formalizing small but potentially useful lemmas, improving reproducibility and preserving provenance that would otherwise be lost. That creates tradeoffs: an influx of low-value papers risks cluttering the literature and citation chains, while folklore proofs keep accumulating missing attributions. The piece suggests embracing AI for writing up marginal results and then using automated methods (perhaps the same AIs) to curate, verify, and integrate them into the corpus, prompting a rethink of norms around authorship, review standards, and the archival role of preprint servers.
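The post does not spell out the binary-search trick, so here is a minimal sketch of the standard argument that a P machine with a PP oracle can evaluate a #P function exactly. The names `count_via_pp_oracle` and `pp_at_least` are hypothetical stand-ins: in the real proof, each threshold question "is the accepting-path count at least k?" is a single PP-oracle query.

```python
# Sketch of the binary-search trick behind P^PP = P^#P: recover the
# exact value of a #P function (an accepting-path count) using only
# polynomially many threshold queries to a PP oracle.
# `pp_at_least` is a hypothetical stand-in for the PP oracle.

def count_via_pp_oracle(pp_at_least, num_bits):
    """Recover an integer c in [0, 2**num_bits] using only
    threshold queries pp_at_least(k) == (c >= k)."""
    lo, hi = 0, 2 ** num_bits          # range of possible path counts
    while lo < hi:
        mid = (lo + hi + 1) // 2       # binary search on the count
        if pp_at_least(mid):           # one PP-oracle query per step
            lo = mid                   # count is at least mid
        else:
            hi = mid - 1               # count is strictly below mid
    return lo                          # ~num_bits queries pin down c

# Toy usage: pretend the hidden #P value is 1234 on 11-bit paths.
hidden = 1234
print(count_via_pp_oracle(lambda k: hidden >= k, 11))  # -> 1234
```

Since a machine with polynomially many nondeterministic branches has at most exponentially many paths, polynomially many such queries pin down the count exactly; the reverse inclusion is immediate, because a PP question is just a comparison of a #P value against a threshold.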