🤖 AI Summary
Researchers propose Hyb Error, a simple scalar metric, |x−y|/(1+|y|), for comparing an approximation x to a reference y. Algebraically it equals half the harmonic mean of the absolute error and the relative error, so it naturally interpolates between the two: as |y|→0 it behaves like absolute error (avoiding the infinite relative error of near-zero targets), and as |y|→∞ it behaves like relative error (avoiding domination by raw differences on large-magnitude targets). The metric is trivial to compute and invert (|x−y| = ε(1+|y|) when Hyb Error = ε), which ties it directly to common floating-point equality checks.
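A minimal sketch of the metric and the harmonic-mean identity described above (the function names `hyb_error` and `half_harmonic` are illustrative, not from the source):

```python
def hyb_error(x: float, y: float) -> float:
    """Hyb Error of approximation x against reference y: |x - y| / (1 + |y|)."""
    return abs(x - y) / (1.0 + abs(y))

def half_harmonic(x: float, y: float) -> float:
    """Half the harmonic mean of absolute and relative error (requires y != 0, x != y)."""
    a = abs(x - y)            # absolute error
    r = abs(x - y) / abs(y)   # relative error
    return (2.0 * a * r / (a + r)) / 2.0

# Near-zero reference: behaves like absolute error (relative error would blow up).
near_zero = hyb_error(1e-3, 0.0)        # == |x - y| / (1 + 0) = 1e-3

# Large reference: approaches relative error, since 1 + |y| ≈ |y|.
large = hyb_error(1.01e6, 1e6)          # ≈ 1e4 / 1e6 = 0.01

# The identity holds exactly in exact arithmetic: a*r/(a+r) = a/(1+|y|).
same = abs(hyb_error(1.5, 2.0) - half_harmonic(1.5, 2.0)) < 1e-12
```

The inversion noted above follows directly: solving Hyb Error = ε for the raw difference gives |x−y| = ε(1+|y|).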
For the AI/ML community, Hyb Error is a pragmatic, scale-aware loss/validation metric for regression, numerical solvers, and scientific ML where targets span many orders of magnitude. For sequences, the Maximum Element-wise Hyb Error (MEHE) captures the worst-case normalized deviation and coincides with the decision boundary of standard isclose-style checks that use equal absolute and relative tolerances, making thresholds interpretable and consistent with existing library semantics. Because it avoids the pathologies of pure absolute or relative error while remaining lightweight, Hyb Error can be adopted easily in evaluation suites, monitoring, and robust training objectives where both small and large target values matter.
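The MEHE/isclose correspondence can be sketched as follows. This assumes numpy-style isclose semantics, where `isclose(a, b, rtol, atol)` tests |a−b| ≤ atol + rtol·|b|; with atol = rtol = ε that condition is exactly Hyb Error ≤ ε. The helper names `mehe` and `all_close` are illustrative:

```python
from typing import Sequence

def mehe(xs: Sequence[float], ys: Sequence[float]) -> float:
    """Maximum Element-wise Hyb Error over paired sequences."""
    return max(abs(x - y) / (1.0 + abs(y)) for x, y in zip(xs, ys))

def all_close(xs: Sequence[float], ys: Sequence[float], tol: float) -> bool:
    """Elementwise |x - y| <= tol + tol*|y|, i.e. a numpy.isclose-style check
    with equal absolute and relative tolerances (atol == rtol == tol)."""
    return all(abs(x - y) <= tol * (1.0 + abs(y)) for x, y in zip(xs, ys))

# MEHE <= tol holds if and only if every element passes the isclose-style
# check with atol = rtol = tol, so a single scalar threshold on MEHE
# reproduces the familiar library semantics.
xs, ys = [1e-3, 1.01e6], [0.0, 1e6]
passes = all_close(xs, ys, 0.01)    # worst element's Hyb Error is just under 0.01
fails = all_close(xs, ys, 0.005)    # tighter tolerance rejects the large-y element
```

One design note: because the bound tol·(1+|y|) is the sum of an absolute and a relative term, the same tol is forgiving near zero and scale-proportional for large |y|, which is exactly the interpolation property of the scalar metric.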