🤖 AI Summary
torchcurves is a new PyTorch library that provides fully differentiable parametric curves with learnable coefficients (e.g., B‑Spline control points, Legendre polynomial bases) that plug directly into PyTorch's autograd. It's designed for use as continuous numerical embeddings (for factorization machines, transformers), as activation/function-basis layers in Kolmogorov–Arnold Networks (KANs), and for tasks like robotics path planning. The package ships vectorized, batch-friendly implementations, example models (embedding layers and KAN-style stacks), tests, and documentation, and can be installed via pip.
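To make the core idea concrete, here is a minimal sketch of such a learnable curve in plain PyTorch. This is not the torchcurves API; the class name, recurrence-based basis construction, and initialization are illustrative assumptions, showing only how a curve with trainable coefficients can serve as a continuous embedding while gradients flow through ordinary autograd.

```python
# Hypothetical sketch (NOT the torchcurves API): a learnable curve as a
# Legendre polynomial expansion with trainable coefficients.
import torch
import torch.nn as nn


class ToyLegendreCurve(nn.Module):
    """y(x) = sum_k c_k * P_k(x), with learnable c_k; x expected in [-1, 1]."""

    def __init__(self, degree: int, out_features: int = 1):
        super().__init__()
        # One coefficient per basis polynomial per output channel.
        self.coeffs = nn.Parameter(torch.randn(degree + 1, out_features) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build P_0..P_degree via the Bonnet recurrence:
        # (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
        basis = [torch.ones_like(x), x]
        for k in range(1, self.coeffs.shape[0] - 1):
            basis.append(((2 * k + 1) * x * basis[-1] - k * basis[-2]) / (k + 1))
        B = torch.stack(basis[: self.coeffs.shape[0]], dim=-1)  # (..., degree+1)
        return B @ self.coeffs  # (..., out_features)


curve = ToyLegendreCurve(degree=5, out_features=8)
x = torch.rand(32) * 2 - 1   # batch of scalars in [-1, 1]
emb = curve(x)               # (32, 8) continuous embedding
emb.sum().backward()         # gradients reach curve.coeffs
```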
Key technical points: a custom autograd function preserves correct gradient flow through curve evaluation; implementations include B‑SplineCurve, LegendreCurve, and helper layers like tc.Sum; knots, polynomial degree, and control-point counts are all configurable. Inputs are expected on a compact interval (typically [-1, 1]), and the library provides normalization strategies to get them there: rational scaling based on a modified Legendre spectral method, a clamp strategy, or user-supplied mappings such as an erf-based clamp. Because the curve parametrization is learnable and flexible, torchcurves lets researchers experiment with alternative continuous embeddings and activation parametrizations (e.g., replacing classic KAN parametrizations), offering a compact, expressive building block for ML architectures that need smooth, differentiable function bases.
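The normalization strategies described above can be sketched as simple maps from the real line into (-1, 1). The function names below and the exact form of the rational map are assumptions for illustration, not the library's actual API; they only show the shape each strategy takes.

```python
# Hedged sketch of the normalization strategies; names are assumptions.
import torch


def erf_clamp(x: torch.Tensor) -> torch.Tensor:
    # Smooth, everywhere-differentiable squashing: erf maps R onto (-1, 1).
    return torch.erf(x)


def rational_scale(x: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    # Rational map x / (|x| + s), in the spirit of the rational mappings used
    # by Legendre spectral methods on unbounded domains; also lands in (-1, 1).
    return x / (x.abs() + scale)


def hard_clamp(x: torch.Tensor) -> torch.Tensor:
    # Simple clamp strategy; note the gradient is zero outside [-1, 1].
    return torch.clamp(x, -1.0, 1.0)
```

The smooth variants keep gradients alive for out-of-range inputs, which matters when the curve layer sits early in a trainable stack; the hard clamp is cheaper but blocks gradient flow at the interval boundary.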