Advancing theoretical computer science with AlphaEvolve (research.google)

🤖 AI Summary
Google DeepMind researchers announced AlphaEvolve, an LLM-driven coding agent that automatically evolves and verifies finite combinatorial structures to produce new theorems in complexity theory. Using an iterative population-and-feedback loop in which an LLM mutates and recombines code snippets that generate candidate gadgets and graphs, AlphaEvolve found a 19-variable, heavily weighted gadget (weights up to 1,429×) that tightens the NP-hardness-of-approximation bound for MAX-4-CUT from 0.9883 to 0.987. It also discovered larger extremal Ramanujan graphs (up to 163 nodes) that raise lower bounds on the average-case hardness of certifying MAX-2-CUT and maximum independent set; combined with recent algorithms, these results match the known upper bounds to within the third decimal place.

Technically, the work leverages “lifting”: improving a finite structure inside a standard proof framework yields stronger universal (∀n) hardness theorems, so AlphaEvolve searches over finite gadgets rather than whole proofs. Crucially, every discovered structure was checked by brute-force verification, and system-level branch-and-bound optimizations gave a reported ~10,000× speedup in verification, enabling exploration of much larger search spaces. The paper shows that LLMs can produce verifiable, nontrivial combinatorial objects (not just informal proofs), shifting the bottleneck toward scalable verification and suggesting AI can be a productive, rigor-compatible collaborator in theoretical computer science.
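The evolve-then-verify loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not AlphaEvolve's actual implementation: a random weight perturbation stands in for the LLM's code mutations, the objective is a placeholder `score` callable rather than a real hardness-gadget criterion, and verification is plain brute force over all 2^n vertex bipartitions (the paper's branch-and-bound optimizations are omitted).

```python
import random


def max_cut_value(n, weighted_edges):
    """Exact MAX-CUT by brute force: enumerate all 2^n vertex bipartitions.

    This is the 'verification' step: the value of a candidate structure is
    certified exactly, not estimated.
    """
    best = 0
    for mask in range(1 << n):  # bit i of mask = side of vertex i
        cut = sum(w for u, v, w in weighted_edges
                  if ((mask >> u) & 1) != ((mask >> v) & 1))
        best = max(best, cut)
    return best


def mutate(edges, rng):
    """Perturb one edge weight -- a stand-in for LLM-proposed code edits."""
    edges = list(edges)
    i = rng.randrange(len(edges))
    u, v, w = edges[i]
    edges[i] = (u, v, max(1, w + rng.choice([-1, 1])))
    return edges


def evolve(n, edges, score, steps=200, seed=0):
    """Minimal population-of-one loop: mutate, verify exactly, keep improvements."""
    rng = random.Random(seed)
    best = edges
    best_score = score(max_cut_value(n, best), best)
    for _ in range(steps):
        cand = mutate(best, rng)
        s = score(max_cut_value(n, cand), cand)  # every candidate is verified
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

A real gadget search would replace the toy objective with the completeness/soundness ratio the hardness reduction needs, and the 2^n enumeration is exactly why the reported ~10,000× verification speedup matters: it sets the ceiling on how large a gadget the loop can afford to check.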