🤖 AI Summary
Researchers from Peking University reported a practical analogue solver for matrix equations (Ax = b) built from foundry-fabricated resistive RAM (RRAM) chips, demonstrating precise and scalable matrix inversion in hardware. Their hybrid analogue scheme interleaves a closed-loop, low‑precision analogue inverter (LP‑INV), implemented on 8×8 1T1R RRAM arrays with operational amplifiers, with a high‑precision analogue matrix–vector multiply (HP‑MVM) realized via bit‑sliced programming on a 1‑Mb RRAM chip. Using a block-matrix decomposition algorithm called BlockAMC, they solved 16×16 real-valued systems to 24‑bit fixed‑point precision (comparable to FP32) and matched FP32 performance on massive‑MIMO signal detection in just three iterations.
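The interleaving described above resembles mixed-precision iterative refinement: a fast, imprecise solve proposes a correction, and a high-precision residual steers the next step. Below is a minimal software sketch of that loop; the function names (`lp_inverse`, `hp_matvec`), the additive noise model for the analogue inverter, and the test matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_inverse(A, r, noise=1e-2):
    """Stand-in for the closed-loop LP-INV: a noisy, low-precision solve of A z = r."""
    z = np.linalg.solve(A, r)
    return z + noise * rng.standard_normal(z.shape) * np.abs(z).max()

def hp_matvec(A, x):
    """Stand-in for the bit-sliced HP-MVM: a high-precision matrix-vector product."""
    return A @ x

def hybrid_solve(A, b, iters=10):
    """Refine x so that A x = b, despite the imprecise inner solver."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - hp_matvec(A, x)    # residual computed at high precision
        x = x + lp_inverse(A, r)   # low-precision correction step
    return x

n = 16
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned toy matrix
b = rng.standard_normal(n)
x = hybrid_solve(A, b)
residual = np.linalg.norm(A @ x - b)
```

Because the residual is re-measured at high precision each round, the error contracts geometrically even though every inner solve carries ~1% noise, which is the essence of why a crude analogue inverter can reach 24-bit results.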
The work addresses the long-standing precision bottleneck of analogue in‑memory computing by combining an iterative LP‑INV, which quickly produces approximate updates, with an HP‑MVM that refines the result via multilevel (3‑bit) bit‑slicing. The 1T1R RRAM cells reliably program eight conductance states (0.5–35 μS) under write‑verify control, and differential encoding plus block decomposition enable scaling despite feedback constraints in the inverter circuit. Benchmarks suggest up to ~1,000× higher throughput and ~100× better energy efficiency than digital processors at the same precision. This makes analogue inversion viable for signal processing, scientific computing and second‑order ML methods, while the demonstrated size (16×16) points to the further engineering work needed to scale to larger systems.
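The multilevel bit-slicing mentioned above can be illustrated in software: each fixed-point matrix entry is split into 3-bit slices (one slice per conductance level range), each slice performs its own matrix-vector multiply, and the partial products are recombined with power-of-8 weights. The slice count and word width here are illustrative assumptions, not the chip's actual configuration.

```python
import numpy as np

BITS_PER_SLICE = 3   # eight conductance states per cell encode 3 bits
NUM_SLICES = 4       # toy example: 12-bit unsigned fixed-point weights

def slice_matrix(A_int):
    """Split a non-negative integer matrix into 3-bit slices, LSB slice first."""
    slices = []
    rem = A_int.copy()
    for _ in range(NUM_SLICES):
        slices.append(rem & 0b111)   # low 3 bits map to one analogue array
        rem >>= BITS_PER_SLICE
    return slices

def bitsliced_matvec(A_int, x):
    """Recombine per-slice analogue MVMs: sum_s 8**s * (A_s @ x)."""
    return sum((8 ** s) * (A_s @ x)
               for s, A_s in enumerate(slice_matrix(A_int)))

rng = np.random.default_rng(1)
A_int = rng.integers(0, 2 ** (BITS_PER_SLICE * NUM_SLICES), size=(16, 16))
x = rng.integers(0, 10, size=16)
y = bitsliced_matvec(A_int, x)   # exactly equals A_int @ x
```

Each slice only ever needs 8 distinguishable conductance levels, which is why per-cell precision requirements stay modest while the recombined product reaches full fixed-point width.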