Algebraic Dynamical Systems in Machine Learning (2024) (link.springer.com)

🤖 AI Summary
Researchers introduce "algebraic dynamical systems": an algebraic framework that models dynamical machine-learning architectures (RNNs, graph neural networks, diffusion models, message-passing systems) as term-rewriting systems whose outputs are evaluated by recursive functions. Using free term algebras and rewrite rules (e.g., f(x, f(y, z)) → f(f(x, y), z)) to encode model syntax, and catamorphic evaluation to give semantics, they prove a key theorem: every standard dynamical system embeds in this class of rewriting models, and every rewriting model projects onto a dynamical system. Framed in category theory, this makes the syntax/semantics split explicit and shows that model properties lift across categories with the right structure.

This matters because it gives a unified, compositional language for dynamic models that makes structural constraints first-class, expressing parameter-sharing, symmetries (equivariance), and hierarchical or non-numerical data directly in the signature. Practical implications include cleaner specification of inductive biases, provable preservation of properties under composition (reducing error compounding in long trajectories), and a principled route to hybrid symbolic-numeric models and learning on structured data. In short, treating dynamics as term rewriting opens formal tools from universal algebra and category theory to reason about, compose, and generalize dynamic ML architectures beyond black-box numeric nets.
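To make the syntax/semantics split concrete, here is a minimal sketch (not the paper's code): terms are nested tuples, the single rewrite rule re-associates f(x, f(y, z)) → f(f(x, y), z) on the syntax side, and a catamorphism supplies semantics by folding a term into a value. All names are illustrative assumptions.

```python
def rewrite(term):
    """Apply the re-association rule bottom-up until a fixed point."""
    if not isinstance(term, tuple):
        return term  # a leaf: variable or constant
    op, left, right = term
    left, right = rewrite(left), rewrite(right)
    # syntax-level rule: f(x, f(y, z)) -> f(f(x, y), z)
    if isinstance(right, tuple) and right[0] == op:
        _, y, z = right
        return rewrite((op, (op, left, y), z))
    return (op, left, right)

def cata(alg, term):
    """Catamorphic evaluation: replace each constructor by alg."""
    if not isinstance(term, tuple):
        return term
    op, left, right = term
    return alg(op, cata(alg, left), cata(alg, right))

# Interpreting 'f' as addition: rewriting changes the term's shape
# but not its catamorphic value, i.e. semantics are preserved.
t = ('f', 1, ('f', 2, ('f', 3, 4)))
normal = rewrite(t)  # -> ('f', ('f', ('f', 1, 2), 3), 4)
add = lambda op, a, b: a + b
assert cata(add, t) == cata(add, normal) == 10
```

The split is visible in the code: `rewrite` only manipulates term structure, while `cata` assigns meaning, so any rewrite rule that is sound for the chosen algebra leaves evaluation unchanged.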