A tiny recursive reasoning model achieves 45% on ARC-AGI-1 and 8% on ARC-AGI-2 (alexiajm.github.io)

🤖 AI Summary
A researcher introduces the Tiny Recursion Model (TRM), a 7M-parameter recursive reasoning network that attains 45% on ARC-AGI-1 and 8% on ARC-AGI-2. TRM is trained from scratch and iteratively refines an answer rather than relying on ever-larger pretrained LLMs. The model embeds the input question x, an initial answer y, and a latent state z, then runs up to K improvement steps: each step performs n recursive updates of z conditioned on x, y, and z (the "recursive reasoning" phase), followed by an update of y given the current y and z. This lightweight loop lets the network correct earlier mistakes and progressively improve its output while remaining parameter-efficient and less prone to overfitting.

Significance: TRM challenges the prevailing large-model-centric mindset by showing that careful algorithmic structure (iterative, self-refining computation) can unlock strong performance on hard reasoning benchmarks with tiny models. It simplifies ideas from recent Hierarchical Reasoning Model work, removing the biological analogies, hierarchies, and fixed-point theorems, and highlights recursion as a practical mechanism for compositional reasoning. The mixed ARC-AGI results (strong on ARC-AGI-1, weak on ARC-AGI-2) also underline both the promise and the limits: recursive refinement can be a cost-effective research direction, but further work is needed to scale robustness and generalization.
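To make the refinement loop concrete, here is a minimal sketch of the control flow described above: n latent updates of z conditioned on (x, y, z), followed by an update of y from (y, z), repeated for K outer steps. This is an illustrative assumption, not the actual TRM implementation; the module names (latent_update, answer_update), the residual MLP updates, and the defaults K=3 and n=6 are all hypothetical stand-ins for TRM's small trained network.

```python
import torch
import torch.nn as nn


class TinyRecursionSketch(nn.Module):
    """Illustrative sketch of TRM-style recursive refinement.

    Module names and update rules are assumptions for exposition;
    the real TRM uses a small network trained end-to-end on ARC tasks.
    """

    def __init__(self, dim: int = 128):
        super().__init__()
        # Latent update: z <- z + f(x, y, z)  (the "recursive reasoning" phase)
        self.latent_update = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # Answer update: y <- y + g(y, z)
        self.answer_update = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, x, y, z, K: int = 3, n: int = 6):
        # K outer improvement steps, each refining the answer once.
        for _ in range(K):
            # n recursive updates of the latent state z, conditioned on x, y, z.
            for _ in range(n):
                z = z + self.latent_update(torch.cat([x, y, z], dim=-1))
            # Update the answer y given the current y and z.
            y = y + self.answer_update(torch.cat([y, z], dim=-1))
        return y, z


if __name__ == "__main__":
    dim = 128
    model = TinyRecursionSketch(dim)
    x = torch.randn(1, dim)   # embedded question
    y = torch.zeros(1, dim)   # initial answer embedding
    z = torch.zeros(1, dim)   # initial latent state
    y_refined, _ = model(x, y, z)
    print(y_refined.shape)    # torch.Size([1, 128])
```

The key design point this sketch captures is that depth comes from reusing one small set of weights across many refinement iterations, rather than from stacking more layers.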