ONNX Runtime v1.23.0 Released (github.com)

🤖 AI Summary
ONNX Runtime v1.23.0 introduces a mix of platform policy changes, execution-provider updates, and runtime optimizations that will affect deployment and performance tuning. Notably, future releases will stop shipping x86_64 macOS and iOS binaries and will raise the minimum supported macOS version from 13.4 to 14.0, a compatibility break that forces affected macOS/iOS users to upgrade or build from source. On Windows, shutdown logic has been simplified so that some globals are intentionally not destroyed during process teardown, reducing exit-time crashes (with no memory-leak risk, since the OS reclaims memory at process exit).

Several EP-level and web/runtime features expand hardware support and usability. AutoEP can now auto-discover devices and download and register the best execution providers (EPs), though the download feature is currently limited to Windows 11 25H2+. The ROCm EP was removed from the source tree (AMD users should migrate to the MIGraphX or Vitis AI EPs), while a new NVIDIA TensorRT RTX EP is available. Web and WebGPU improvements include an EMSDK bump (4.0.4 → 4.0.8) and WGSL template support. QNN SDK support was updated to 2.37, and KleidiAI added SME2-optimized kernels that improve SGEMM/IGEMM and dynamically quantized MatMul, notably boosting Conv2D on SME2-capable Arm hardware. These changes were contributed by Microsoft and the broader community.
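For readers adjusting provider selection around these EP changes, here is a minimal sketch using the long-standing Python provider-list API to express an EP preference order with a CPU fallback (the newer AutoEP discovery/download flow has its own API surface and is not shown). The model path "model.onnx" is a placeholder, and "NvTensorRTRTXExecutionProvider" is an assumed identifier for the new TensorRT RTX EP; check the 1.23.0 release notes for the exact name.

```python
import onnxruntime as ort

# List the execution providers compiled into this build
# (e.g. CPUExecutionProvider, CUDAExecutionProvider, ...).
available = ort.get_available_providers()
print("Available EPs:", available)

# Build a preference list: try the (assumed) TensorRT RTX EP name first,
# then CUDA, then fall back to CPU. Only EPs present in this build are kept.
preferred = [
    ep for ep in (
        "NvTensorRTRTXExecutionProvider",  # assumed name for the new EP
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    )
    if ep in available
]

# "model.onnx" is a placeholder path; providers= selects EPs in priority order.
session = ort.InferenceSession("model.onnx", providers=preferred)
print("Session is using:", session.get_providers())
```

The same pattern applies to the ROCm migration: replace a removed "ROCMExecutionProvider" entry in an existing preference list with the MIGraphX or Vitis AI EP identifier shipped in your build.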