🤖 AI Summary
A new PyTorch reference implementation—Generalized Orders of Magnitude (GOOMs)—lets you represent and compute with real numbers far beyond conventional floating-point limits by working in a complex "order-of-magnitude" domain. GOOMs map real tensors to complex-valued logarithms (goom.log) and back (goom.exp), and provide numerically stable linear-algebra primitives such as goom.log_matmul_exp, so you can, for example, multiply long chains of matrices with a parallel-prefix (scan) operation without the usual scaling, clipping, or stabilization hacks. The library is plug-and-play (single file or pip install), gradient-friendly and broadcastable, and includes options to keep log(0) finite, force complex dtypes, and choose the underlying float dtype.
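To make the core primitive concrete, here is a minimal scalar sketch of a log-matmul-exp in the complex-log domain, using only the standard library. The function name `log_matmul_exp` follows the summary, but this is an illustrative re-derivation, not the library's actual (batched, differentiable, tensor-valued) kernel: each entry of the result is a stabilized log-sum-exp of complex logs, which is what lets signed values ride along via the imaginary part.

```python
import cmath

def log_matmul_exp(A, B):
    """C[i][j] = log(sum_k exp(A[i][k] + B[k][j])), computed in the
    complex-log domain. Sketch only; goom.log_matmul_exp is the
    tensor-valued PyTorch analogue."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[None] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            terms = [A[i][k] + B[k][j] for k in range(m)]
            # Subtract the largest real part before exponentiating,
            # exactly like the classic logsumexp stabilizer.
            shift = max(t.real for t in terms)
            s = sum(cmath.exp(t - shift) for t in terms)
            C[i][j] = shift + cmath.log(s)
    return C

# Multiply two small signed matrices entirely in the log domain.
# cmath.log encodes a negative entry as log|x| + i*pi.
logA = [[cmath.log(complex(v)) for v in row] for row in [[2, -3], [5, 7]]]
logB = [[cmath.log(complex(v)) for v in row] for row in [[1, 4], [-2, 6]]]
logC = log_matmul_exp(logA, logB)
C = [[cmath.exp(z).real for z in row] for row in logC]  # ≈ [[8, -10], [-9, 62]]
```

Because entries stay in log space between multiplications, a long chain of such products never materializes an overflowing intermediate, which is what makes the parallel-prefix scan over matrix products viable.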
Technically, the implementation builds on PyTorch complex dtypes to define Complex64 and Complex128 GOOMs, whose dynamic ranges (≈exp(10^38) and ≈exp(10^308), respectively) dwarf those of Float32/Float64. Where magnitudes overlap the float range, precision is competitive with floats, while runtime and memory typically double. The repo includes scripts and experiments (chains of up to 1M matrix products, Lyapunov-spectrum estimation with a selective-resetting prefix scan, and deep RNNs capturing long-range non-diagonal recurrences) that showcase practical gains. The authors recommend applying GOOMs selectively where float dynamic range fails, and provide scaling helpers (goom.scale, goom.scaled_exp) plus documentation for integrating GOOMs into training pipelines.
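The dynamic-range claim is easy to demonstrate with a scalar toy version of the encoding. In the sketch below, `to_goom`/`from_goom` are hypothetical stand-ins for the library's tensor-valued goom.log/goom.exp: the real part of the complex log carries the order of magnitude, the imaginary part (0 or π) carries the sign, so intermediate products far beyond float64's ≈1.8e308 limit never overflow.

```python
import cmath

def to_goom(x):
    # Complex log of a real: log|x| in the real part, sign (0 or pi)
    # in the imaginary part. Illustrative scalar analogue of goom.log.
    return cmath.log(complex(x))

def from_goom(z):
    # Inverse map: complex exp, then take the real part (goom.exp analogue).
    return cmath.exp(z).real

# The running product passes through about -2e380, well beyond float64
# range, yet the final value is recoverable because only sums of logs
# are ever materialized.
xs = [-1e200, 2e180, -3e-150]
log_prod = sum(to_goom(x) for x in xs)
result = from_goom(log_prod)  # ≈ 6e230, with the sign recovered correctly
```

A direct `math.prod(xs)` would overflow to ±inf at the second factor; the log-domain sum stays a small, well-conditioned complex number throughout.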