IBM Patented Euler's 200 Year Old Math Technique for 'AI Interpretability' (leetarxiv.substack.com)

🤖 AI Summary
IBM, via the authors of its 2021 NeurIPS paper "CoFrNets," has been granted a patent claim on using derivatives to compute convergents of generalized continued fractions: in effect, the classic number-theory construction of Euler, Gauss, and Ramanujan implemented as a differentiable PyTorch computation graph and optimized via backward()/autodiff. The CoFrNets design chains linear layers in which each "bias" is replaced by another linear transform, with a safe reciprocal as the nonlinearity; the result is a generalized continued fraction trainable by standard gradient descent (a minimal sketch follows below).

Empirically, the paper reports modest results (roughly 61% accuracy on a waveform task) and reiterates that continued fractions are universal approximators, though they suffer vanishing gradients and other limitations familiar from the analytic literature.

The significance is legal and practical: patenting a centuries-old mathematical construction expressed as a neural computation alarms researchers, numerical analysts, and tool builders. If upheld, the claim could complicate implementations in tools like PyTorch, Sage, or Mathematica, along with routine scientific code that differentiates continued-fraction-style formulas. From a technical perspective the idea is straightforward (replace biases with nested linear maps and rely on autodiff), so the real controversy is over patentability and prior art reaching back to Euler. The episode highlights looming friction between classical math, reproducible ML code, and IP regimes that could inadvertently throttle low-level algorithmic building blocks.
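To make the construction concrete, here is a minimal PyTorch sketch of a continued-fraction "ladder" in the spirit of CoFrNets, not IBM's reference implementation. The names (ContinuedFractionLadder, safe_reciprocal), the depth, and the clamping constant eps are illustrative assumptions; only the overall shape, linear terms composed through a safeguarded reciprocal and trained via autodiff, comes from the summary above.

```python
# Hypothetical sketch of a CoFrNet-style generalized continued fraction,
# based only on the description in this summary (not IBM's code):
#   f(x) = a0(x) + 1 / (a1(x) + 1 / (a2(x) + ...)),  each ai linear in x.

import torch
import torch.nn as nn


def safe_reciprocal(z: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    """1/z with |z| clamped away from zero so gradients stay finite."""
    sign = torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    return sign / torch.clamp(z.abs(), min=eps)


class ContinuedFractionLadder(nn.Module):
    """Each 'bias' is itself another linear transform of the input,
    chained through a safe reciprocal nonlinearity."""

    def __init__(self, in_features: int, depth: int = 6):
        super().__init__()
        self.terms = nn.ModuleList(
            nn.Linear(in_features, 1) for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Evaluate bottom-up: start at the deepest term and fold upward.
        out = self.terms[-1](x)
        for layer in reversed(self.terms[:-1]):
            out = layer(x) + safe_reciprocal(out)
        return out


# Trained like any other module with standard gradient descent:
model = ContinuedFractionLadder(in_features=4, depth=6)
x = torch.randn(8, 4)
loss = model(x).pow(2).mean()
loss.backward()  # autodiff straight through the continued fraction
```

The final backward() call differentiates through the fraction's convergents, which is precisely the derivatives-of-continued-fractions operation the summary says the patent claim covers.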