🤖 AI Summary
Einx introduces a universal notation that cleanly separates the "elementary operation" (the core computation) from "vectorization" (how that operation is applied across sub-tensors). Instead of each framework exposing different APIs with implicit broadcasting/vmap rules for different ops, einx uses a single, consistent string-based notation to express vectorization for every elementary operation: each primitive has exactly one API, and any pattern expressible with tools like jax.vmap can also be written in einx. The proposal maps many existing calls (torch.gather / take_along_dim / tf.gather_nd, broadcasting idioms, and dot/einsum-like contractions) to unified einx forms such as einx.dot("... a [b], ... [b] c -> ... a c", x, y).
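As a minimal sketch of the pattern quoted above, the call runs directly against a NumPy backend (einx dispatches on the input tensor type); the shapes below are arbitrary placeholders chosen for illustration:

```python
import numpy as np
import einx

# Batched matrix multiply: "..." captures the leading batch axes,
# and "[b]" marks the axis that is contracted between the two inputs.
x = np.random.rand(4, 8, 16)   # shape: (batch, a, b)
y = np.random.rand(4, 16, 32)  # shape: (batch, b, c)

z = einx.dot("... a [b], ... [b] c -> ... a c", x, y)
print(z.shape)  # (4, 8, 32)
```

The same string describes both the elementary operation (a dot product over `b`) and its vectorization (looping over `...` and the free axes `a`, `c`), which is exactly the separation the notation is built around.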
For the AI/ML community this matters because it reduces API bloat and the cognitive overhead of composing tensor programs, makes vectorization semantics explicit and portable across backends, and retains the full expressive power of vmap-style batched indexing. Practically, einx can represent gather/index variants, broadcasting, and complex batched contractions with a single representation, while backends remain free to dispatch to optimized kernels (e.g., matmul) rather than emulating vectorized loops. The result is a clearer, more composable interface for the advanced tensor-manipulation and batching patterns common in ML models and research.
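A short sketch of the gather and broadcasting cases mentioned above, using einx's documented `add` and `get_at` primitives; the pattern strings and shapes here are illustrative assumptions based on einx's bracket notation, not quoted from the proposal:

```python
import numpy as np
import einx

# Broadcasting a bias over rows: the pattern states the broadcast explicitly
# instead of relying on implicit framework rules.
x = np.random.rand(4, 8)
bias = np.random.rand(8)
y = einx.add("a b, b -> a b", x, bias)  # bias is repeated along axis "a"

# Batched indexing in the style of torch.take_along_dim: "[s]" marks the
# axis of the first tensor that is indexed into, per batch element "b".
values = np.random.rand(4, 10, 3)            # (batch, s, c)
idx = np.random.randint(0, 10, size=(4, 5))  # (batch, p) integer indices
picked = einx.get_at("b [s] c, b p -> b p c", values, idx)
print(picked.shape)  # (4, 5, 3)
```

Both calls use the same bracket-based representation, so gather variants and broadcasting idioms that differ across frameworks collapse into one notation.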