Imec's superconducting chips to shrink power usage 100x (2024) (spectrum.ieee.org)

🤖 AI Summary
Imec announced a full‑stack superconducting processor design, a "superconductor processing unit" (SPU) manufacturable with standard CMOS tools, that it says could make data‑center‑scale computing up to 100× more energy efficient, potentially shrinking a data center into a shoebox. The work targets the looming power crisis from AI scale‑up (training demands are doubling roughly every six months) by moving logic and interconnects to superconducting Josephson‑junction circuits that operate on single‑flux‑quantum (SFQ) pulses. Imec argues that above about 10^16 FLOPS (10 petaflops) the cooling overhead is amortized and superconducting systems beat classical counterparts on energy per operation, a tipping point already within today's HPC/AI envelope.

Technically, Imec's team replaced lab‑grade niobium with CMOS‑compatible niobium‑titanium‑nitride and an amorphous silicon (α‑Si) tunnel barrier to scale junctions down to ~210 nm. Logic uses picosecond SFQ pulses (~2.07 mV·ps, roughly 2×10^-20 J per pulse) and a novel "pulse‑conserving" architecture in which the SFQ count is preserved through loops of Josephson junctions and inductors. The team also developed resonant on‑chip power delivery (replacing bulky transformers), vertical superconducting interconnects, 3D stacking, and a glass thermal bridge linking 77 K DRAM to 4 K logic.

The result is a manufacturable cryogenic stack optimized for AI; adoption will hinge on cryogenic infrastructure, software and toolchain adaptation, and co‑design of memory and algorithms to exploit ultra‑low‑energy SFQ primitives.
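As a rough illustration of the two quantitative claims above, the SFQ pulse energy and the 10^16 FLOPS tipping point, here is a back‑of‑envelope sketch in Python. The flux quantum and the figures quoted in the summary come from the article; the junction critical current, per‑operation energies, cooling factor, and fixed cryocooler overhead are assumed placeholder values, not imec's data.

```python
# Back-of-envelope sketch of the two quantitative claims in the summary.
# The flux quantum and the quoted ~2e-20 J / 10^16 FLOPS figures come from
# the article; everything else below is an illustrative assumption.

PHI_0 = 2.07e-15          # flux quantum h/(2e) in webers (= 2.07 mV*ps)

# Energy of one single-flux-quantum (SFQ) pulse: E ~= Phi_0 * I_c.
# Assuming a junction critical current of ~10 uA (a typical order of
# magnitude for SFQ logic, not a figure from imec) reproduces ~2e-20 J.
I_C = 10e-6               # amperes (assumed)
e_pulse = PHI_0 * I_C
print(f"energy per SFQ pulse ~ {e_pulse:.1e} J")   # ~2.1e-20 J

# Crossover sketch: the cryogenic system pays a roughly fixed cooling
# overhead plus a watts-at-300-K-per-watt-at-4-K penalty on dissipated
# power, but its per-operation energy is far lower, so above some
# throughput the overhead is amortized. All four constants are placeholders.
E_CLASSICAL = 10e-12      # J per operation, assumed for a room-temperature system
E_SC = 1e-16              # J per operation dissipated at 4 K, assumed
COOLING_FACTOR = 1000     # W at 300 K per W removed at 4 K, assumed
P_CRYO_FIXED = 100e3      # W of fixed cryocooler overhead, assumed

def total_power(flops: float, superconducting: bool) -> float:
    """Wall-plug power (W) at a given sustained throughput (FLOPS)."""
    if superconducting:
        return P_CRYO_FIXED + flops * E_SC * COOLING_FACTOR
    return flops * E_CLASSICAL

for flops in (1e14, 1e15, 1e16, 1e17, 1e18):
    p_cl = total_power(flops, superconducting=False)
    p_sc = total_power(flops, superconducting=True)
    print(f"{flops:.0e} FLOPS: classical {p_cl/1e3:8.1f} kW, "
          f"superconducting {p_sc/1e3:8.1f} kW")
```

With these placeholder constants the crossover lands just above 10^16 FLOPS and the asymptotic advantage approaches the quoted ~100×, but the exercise only shows the shape of the argument, not imec's actual accounting.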