🤖 AI Summary
Nvidia is reportedly preparing to ship fully assembled Level‑10 (L10) VR200 "compute trays" next year: prebuilt modules that include a Vera CPU, Rubin GPUs, memory, NICs, power‑delivery hardware, midplane interfaces, and liquid‑cooling cold plates, rather than just chips, boards, or partial sub‑assemblies. According to J.P. Morgan (the report is unofficial and not yet confirmed by Nvidia), the move would extend beyond prior L7–L8 integrations (such as the GB200/Bianca board) to deliver a near‑complete server "compute engine" that partners need only drop into racks, fit with chassis‑level power, sidecars, and BMCs, and put through final testing.
For AI/ML infrastructure this is significant: it accelerates deployment and lowers engineering costs for hyperscalers and ODMs by shifting complex, high‑risk PCB and cooling design to Nvidia and contract EMS firms, but it also concentrates control and margins with Nvidia while eroding OEM differentiation. Technical drivers include much higher Rubin GPU TDPs (reported to rise from 1.4 kW to 1.8 kW, with unconfirmed SKUs up to ~2.3 kW) and growing cooling complexity that favors standardized liquid or immersion solutions; a rough sketch of the resulting rack‑level power is given below. The change would leave partners as system integrators and service providers, maintaining firmware and enterprise features, while Nvidia standardizes the compute heart, with wider implications for supply‑chain power dynamics, OEM business models, and rack‑scale designs (e.g., Kyber NVL576 and emerging 800V architectures).
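To put those TDP figures in context, here is a back‑of‑envelope estimate of rack‑level power, a minimal sketch in Python. The per‑GPU TDPs are the reported figures above; the 72‑GPU rack size and the non‑GPU overhead factor are illustrative assumptions (Kyber‑class racks pack far more GPUs), not confirmed Nvidia specifications.

```python
# Back-of-envelope rack power estimate, illustrating why per-GPU TDP growth
# pushes designs toward liquid cooling. GPU count and overhead factor are
# illustrative assumptions, not confirmed Nvidia specifications.

GPU_TDPS_KW = {"reported_low": 1.4, "reported_high": 1.8, "unconfirmed_sku": 2.3}
GPUS_PER_RACK = 72        # assumption: an NVL72-style rack; Kyber-class racks hold far more
NON_GPU_OVERHEAD = 1.25   # assumption: CPUs, NICs, memory, pumps, and conversion losses

for label, tdp_kw in GPU_TDPS_KW.items():
    gpu_kw = tdp_kw * GPUS_PER_RACK
    total_kw = gpu_kw * NON_GPU_OVERHEAD
    print(f"{label:>15}: {gpu_kw:6.1f} kW of GPU power, ~{total_kw:6.1f} kW per rack")
```

Even the most conservative case lands well above what air cooling comfortably handles (typically a few tens of kW per rack), which is one reason standardized cold plates and higher‑voltage distribution such as the 800V architectures mentioned above become attractive.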