Intel's Nova Lake-AX for Local LLMs – What We Know About AMD's Halo Competitor (www.hardware-corner.net)

🤖 AI Summary
Intel is rumored to be developing a high-end APU codenamed Nova Lake‑AX aimed squarely at the “big APU” market for running large local LLMs, positioning it as a direct competitor to AMD’s Strix Halo. Leaks describe a chiplet design with a 28‑core CPU (8P + 16E + 4LP) and a large integrated GPU based on the unreleased Xe3P architecture with 384 EUs (≈6,144 FP32 lanes). Critically for inference workloads, Nova Lake‑AX is said to use a 256‑bit memory bus paired with LPDDR5X at up to 9,600–10,667 MT/s, yielding a theoretical peak bandwidth of around 341 GB/s versus Strix Halo’s 256 GB/s (LPDDR5X‑8000). The project’s timeline and even its launch remain uncertain.

For local LLM users, the implications are clear: on paper, Nova Lake‑AX promises materially more raw compute and a roughly 33% memory‑bandwidth edge over Strix Halo, which could speed up prompt processing and token generation in single‑socket systems. But timing and ecosystem matter — Intel would need efficient Xe3 software and drivers to realize those gains, while AMD’s ROCm and the imminent Medusa Halo (rumored 384‑bit bus + LPDDR6 with 480–690 GB/s) could negate the advantage. Nova Lake‑AX is exciting on paper, but real‑world performance will hinge on silicon delivery, drivers, and how AMD’s next generation responds.
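The bandwidth figures quoted above follow from simple arithmetic: peak bandwidth is the bus width in bytes multiplied by the transfer rate. Below is a minimal sketch of that calculation, plus a rough memory‑bound estimate of decode speed (tokens/s ≈ bandwidth ÷ model weight size), which is a common heuristic for local LLM inference and not something the article itself computes. The function names and the 40 GB example model size are illustrative assumptions.

```python
# Back-of-the-envelope check of the bandwidth numbers quoted above.
# Assumption: peak bandwidth = bus width (bytes) * transfer rate (MT/s).
# The tokens/s figure is an upper bound for memory-bound decoding, ignoring
# KV-cache traffic, compute limits, and real-world memory efficiency.

def peak_bandwidth_gb_s(bus_width_bits: int, mt_per_s: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * mt_per_s / 1000  # MB/s -> GB/s

def memory_bound_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode rate if every token reads all weights once."""
    return bandwidth_gb_s / model_size_gb

configs = {
    "Nova Lake-AX (256-bit, LPDDR5X-10667)": (256, 10667),
    "Nova Lake-AX (256-bit, LPDDR5X-9600)":  (256, 9600),
    "Strix Halo   (256-bit, LPDDR5X-8000)":  (256, 8000),
}

MODEL_SIZE_GB = 40  # illustrative: roughly a 70B-class model at 4-bit quantization

for name, (bus_bits, rate) in configs.items():
    bw = peak_bandwidth_gb_s(bus_bits, rate)
    tps = memory_bound_tokens_per_s(bw, MODEL_SIZE_GB)
    print(f"{name}: {bw:6.1f} GB/s peak, ~{tps:.1f} tok/s upper bound")
```

Running this reproduces the article’s figures (~341 GB/s for Nova Lake‑AX at 10,667 MT/s, 256 GB/s for Strix Halo) and shows why the ~33% bandwidth edge translates fairly directly into a similar ceiling on token‑generation speed for models that don’t fit in cache.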