🤖 AI Summary
High Bandwidth Flash (HBF) is being proposed as a cheaper, ultra-high-capacity complement to GPU High Bandwidth Memory (HBM): many full 3D‑NAND dies stacked on top of base logic and routed to GPUs through an interposer. Unlike HBM (expensive stacked DRAM, currently 8–16 layers; SK Hynix's 16‑Hi stack gives 48 GB, and future HBM4/5 aim for much higher bandwidth and thousands of TSVs), HBF would use stacked 3D‑NAND dies (SK Hynix ships 238‑layer 512 Gb dies and has 321‑layer technology coming). A 12‑Hi HBF built from 238‑layer dies would total ~2,856 NAND layers and ~768 GB; a 16‑Hi stack of 321‑layer dies would exceed 5,000 layers and could surpass 1 TB. Flash brings a much lower cost per bit than DRAM, at the price of higher latency and lower bandwidth.
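The capacity arithmetic above can be sketched in a few lines. This is a back-of-envelope helper (the function name and the 1 Tb density for the 321‑layer die are assumptions; only the 238‑layer/512 Gb figures and stack heights come from the text):

```python
GBIT_PER_GB = 8  # 8 gigabits per gigabyte

def hbf_stack(num_dies: int, layers_per_die: int, die_density_gbit: int):
    """Total NAND layers and raw capacity (GB) for a stack of identical dies."""
    total_layers = num_dies * layers_per_die
    capacity_gb = num_dies * die_density_gbit / GBIT_PER_GB
    return total_layers, capacity_gb

# 12-Hi stack of 238-layer, 512 Gb dies (figures quoted in the text)
print(hbf_stack(12, 238, 512))    # → (2856, 768.0)

# 16-Hi stack of 321-layer dies; 1 Tb (1024 Gb) density is an assumption,
# not stated in the text, but consistent with "surpass 1 TB"
print(hbf_stack(16, 321, 1024))   # → (5136, 2048.0)
```

Note that the layer count grows with both stack height and per-die layers, which is why a 16‑Hi stack of next-generation dies crosses the 5,000-layer mark.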
The catch is staggering engineering complexity: every additional 3D‑NAND stack requires vertical channels to the base logic, more TSVs or wiring rerouted through or around other dies, and a far more complex interposer and routing fabric to get signals to the GPU. Physical size, signal integrity, thermal, and manufacturing challenges grow non‑linearly as stacks scale. Standardization, with GPU vendor involvement (e.g., Nvidia), will be crucial to avoid monopolies and enable a multi‑supplier ecosystem; Sandisk and SK Hynix are already active. Given these plumbing and integration hurdles, HBF looks promising but is likely at least two years from commercial viability.