🤖 AI Summary
Arm has released a deep technical dive into Neural Super Sampling (NSS), an AI-powered temporal upscaling solution that will ship on Arm GPUs in 2026 and is available now for developers to experiment with. NSS replaces hand-tuned temporal anti-aliasing heuristics with a learned spatiotemporal model that reduces ghosting, disocclusion artifacts, and temporal instability, all problems that grow worse with upscaling. Unlike rule-based temporal super sampling approaches such as FSR2 or Arm ASR, NSS generalizes across content and handles challenging cases such as particle effects and thin geometry without reactive masks.
Technically, NSS is trained recurrently on ~100-frame sequences: 540p inputs rendered at 1 sample per pixel paired with 1080p ground truth rendered at 16 samples per pixel, using a spatiotemporal loss that balances spatial fidelity against temporal stability. Inputs include color, motion vectors, depth, jitter, and camera metadata; training uses PyTorch (Adam with cosine annealing), ExecuTorch for quantization-aware training, and Slang for the pre/post passes.

The runtime is a four-level UNet with skip connections, preceded by GPU compute-shader preprocessing (luma derivative, disocclusion mask, reprojection of hidden features), executed with Vulkan ML inference, and followed by a post-process shader, all integrated into the render graph for mobile. Performance targets are tight (≤4 ms upscaler budget, ~27 GOPs compute ceiling, parameter net ≈10 GOPs); early simulations show ~75% of Arm ASR's runtime at 1.5× upscaling and projected wins at 2×. NSS is presented as a practical, deployable pattern for ML-driven rendering on mobile.
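To make the model shape concrete, here is a minimal PyTorch sketch of a four-level UNet with skip connections and a recurrent hidden-feature output. Only the four-level encoder/decoder shape with skips and the idea of hidden features carried between frames come from Arm's description; the channel counts, input packing, and output heads are assumptions, and the real NSS pre/post passes (reprojection, upsampling, disocclusion masking) are omitted.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic UNet building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UpscalerUNet(nn.Module):
    """Four-level encoder/decoder with skip connections and a recurrent
    hidden-feature output. Channel counts and input packing are assumed."""

    def __init__(self, in_ch=9, hidden_ch=8, base=16):
        super().__init__()
        self.hidden_ch = hidden_ch
        chs = [base, base * 2, base * 4, base * 8]        # four resolution levels
        self.enc = nn.ModuleList()
        prev = in_ch + hidden_ch                          # frame inputs + reprojected hidden features
        for c in chs:
            self.enc.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i - 1], kernel_size=2, stride=2) for i in range(3, 0, -1)]
        )
        self.dec = nn.ModuleList(
            [conv_block(chs[i - 1] * 2, chs[i - 1]) for i in range(3, 0, -1)]
        )
        self.to_color = nn.Conv2d(chs[0], 3, kernel_size=1)           # predicted colour
        self.to_hidden = nn.Conv2d(chs[0], hidden_ch, kernel_size=1)  # features carried to the next frame

    def forward(self, x, hidden=None):
        if hidden is None:                                # first frame of a sequence
            hidden = x.new_zeros(x.shape[0], self.hidden_ch, x.shape[2], x.shape[3])
        x = torch.cat([x, hidden], dim=1)
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:                     # keep a skip at every level but the bottom
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.to_color(x), self.to_hidden(x)
```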
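And a sketch of the recurrent training scheme, reusing the UpscalerUNet class from the sketch above: losses are accumulated over a frame sequence so the network is penalized for temporal instability, then optimized with Adam under a cosine-annealed learning rate, as described in the summary. The loss weighting, sequence length, epoch count, and synthetic data loader are placeholders, not Arm's actual recipe.

```python
import torch
import torch.nn.functional as F


def spatiotemporal_loss(pred, target, prev_pred, prev_target, w_temporal=0.25):
    """Spatial fidelity plus a penalty on frame-to-frame changes in the
    prediction that the ground truth does not show; the 0.25 weight is assumed."""
    spatial = F.l1_loss(pred, target)
    temporal = F.l1_loss(pred - prev_pred, target - prev_target)
    return spatial + w_temporal * temporal


def synthetic_loader(num_sequences=4, frames=8, h=96, w=160):
    """Stand-in for the real dataset: random (inputs, targets) frame sequences.
    Real training pairs 540p 1-spp inputs with 1080p 16-spp ground truth."""
    for _ in range(num_sequences):
        yield (
            [torch.randn(1, 9, h, w) for _ in range(frames)],          # colour, MVs, depth, jitter, ...
            [torch.randn(1, 3, h * 2, w * 2) for _ in range(frames)],  # higher-resolution ground truth
        )


model = UpscalerUNet()                                    # from the previous sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
num_epochs = 100                                          # assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    for inputs_seq, targets_seq in synthetic_loader():
        optimizer.zero_grad()
        hidden, prev_pred, prev_target = None, None, None
        loss = torch.zeros(())
        for inputs, target in zip(inputs_seq, targets_seq):
            pred, hidden = model(inputs, hidden)
            # The real pipeline handles upscaling in its pre/post passes; bilinear
            # interpolation here is only a stand-in to match target resolution.
            pred = F.interpolate(pred, size=target.shape[-2:], mode="bilinear", align_corners=False)
            if prev_pred is not None:
                loss = loss + spatiotemporal_loss(pred, target, prev_pred, prev_target)
            prev_pred, prev_target = pred, target
            # hidden is carried without detaching, so gradients flow through the
            # whole sequence; truncated BPTT would be the memory-saving alternative.
        loss.backward()
        optimizer.step()
    scheduler.step()
```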
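For scale, if the roughly 27 GOPs per-frame ceiling all has to land inside the ≤4 ms upscaler window (an assumption; the summary only lists the two figures side by side), the implied sustained throughput is:

```latex
\frac{27\ \mathrm{GOP}}{4\ \mathrm{ms}} = \frac{27 \times 10^{9}\ \mathrm{ops}}{4 \times 10^{-3}\ \mathrm{s}} \approx 6.75\ \mathrm{TOP/s}
```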