🤖 AI Summary
Researcher Zixi Li has posted two arXiv papers (2509.05550 and 2509.13364), with accompanying GitHub code, claiming to have "spectacularly" solved the ARC Prize 2025 challenge (ARC-AGI-2), reporting 100% accuracy with a tiny, fast model of roughly 6 million parameters. According to the claims, Li produced two distinct, non-attention-based solutions (one described as a tree/AST-oriented approach) that generalize to hierarchical and AST-style reasoning tasks. Both solutions are open-sourced, and the second paper builds on the first, positioning the methods as a potential new paradigm for neurosymbolic reasoning, program synthesis, and structured-NLP problems such as dependency parsing and AST understanding.
If validated, these results would be highly significant: they would imply strong, sample-efficient generalization on a novel-reasoning benchmark with dramatically smaller models and without standard transformer attention, which could reshape architectures and tooling for program synthesis and symbolic-neural hybrid systems. That said, the claim is extraordinary, and community verification is essential; replication from the repos should be straightforward and is urgently needed (a minimal scoring harness is sketched below). The story also highlights gaps in paper discovery and review: surprising breakthroughs can sit quietly on arXiv until they are reproduced and widely discussed.
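Because the claim is exact-match accuracy on a public benchmark, a replication attempt mostly amounts to scoring a solver over the published ARC-format JSON tasks, where each file contains "train" and "test" lists of {"input", "output"} integer grids. Below is a minimal scoring sketch, assuming that format; the `solve` function is a hypothetical stand-in for whatever inference entry point Li's repos actually export, and the data path is an assumption.

```python
import json
from pathlib import Path

# An ARC grid is a list of rows, each a list of ints 0-9 (colors).
Grid = list[list[int]]

def solve(train_pairs: list[dict], test_input: Grid) -> Grid:
    """Hypothetical solver interface; replace with the repo's actual
    inference call. Shown as identity here purely for illustration."""
    return test_input  # stand-in; a real solver infers the transformation

def evaluate(task_dir: Path) -> float:
    """Score a solver over ARC-format JSON tasks. A task counts as
    solved only if every test output is reproduced exactly."""
    solved, total = 0, 0
    for path in sorted(task_dir.glob("*.json")):
        task = json.loads(path.read_text())
        ok = all(
            solve(task["train"], pair["input"]) == pair["output"]
            for pair in task["test"]
        )
        solved += ok
        total += 1
    return solved / total if total else 0.0

if __name__ == "__main__":
    # Assumed location of the public evaluation tasks.
    accuracy = evaluate(Path("data/evaluation"))
    print(f"Tasks solved exactly: {accuracy:.1%}")
```

A harness like this makes the 100% claim falsifiable in minutes on the public evaluation split; the held-out ARC Prize sets would still require a submission through the official competition pipeline.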