🤖 AI Summary
Researchers at the University of Patras unveiled IMCE, a distributed, FPGA-based emulation environment for in-memory computing (IMC) systems that speeds up prototyping, debugging and validation of AIMC/DIMC accelerators and full SoC integrations before silicon is available. IMCE combines a Front-End FPGA (IMCE-FE) that mimics the chip's I/O, multiple Processing Units (IMCE-PUs) implemented on FPGA boards (an analog AnPU and a digital DiPU), and a Configuration & Data Analytics server (IMCE-CDA) for mapping models, generating inputs and collecting low-latency statistics. The platform supports realistic tracing, intentional noise injection for NVM behavior studies, and arbitrarily long test sequences; these capabilities are impractical on physical NVM chips alone and make IMCE valuable for both architectural exploration and reliability/accuracy analysis.
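To give a flavor of the noise-injection studies such a platform enables, here is a minimal sketch that perturbs a stored weight matrix with Gaussian noise and measures the resulting output error. The noise model, the `inject_nvm_noise` helper, and all parameter values are illustrative assumptions, not IMCE's actual mechanism.

```python
import numpy as np

def inject_nvm_noise(weights: np.ndarray, sigma: float = 0.02,
                     rng: np.random.Generator | None = None) -> np.ndarray:
    """Return a noisy copy of the weights, modeling NVM cell variation
    as additive Gaussian noise scaled to the weight range (assumption)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma * np.abs(weights).max(), size=weights.shape)
    return weights + noise

# Compare ideal vs. noisy matrix-vector products over repeated trials.
W = np.random.default_rng(0).standard_normal((512, 256)).astype(np.float32)
x = np.random.default_rng(1).standard_normal(256).astype(np.float32)
ideal = W @ x
errors = [np.abs(inject_nvm_noise(W) @ x - ideal).mean() for _ in range(100)]
print(f"mean absolute output error: {np.mean(errors):.4f}")
```

Running many such trials with different noise levels is the kind of statistical accuracy study that is tedious or impossible to script against a physical NVM chip but trivial in an emulated environment.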
Technically, IMCE organizes PUs as pipelined NPU clusters communicating over 1/10 Gbps links via dedicated ComEng modules (TCP/IP), with a hidden shared DRAM region used for fast telemetry. Each IMCE-PU hosts four Arm Cortex-A53 cores and two Cortex-R5 cores running F/S-thread pairs that handle dynamic graph transitions and resource allocation. The An-Accelerator executes the analog workloads (MVM and Conv2D), supporting matrices up to 4096×512, INT8 compute with FP32 scaling, URAM weight staging, and 512 DSP48E2 units for parallel MACs. Together with the software mapping tools on the CDA, IMCE provides an expandable, high-fidelity environment for evaluating the performance, accuracy tradeoffs and system-level interactions of in-memory AI accelerators.
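To make the "INT8 compute with FP32 scaling" scheme concrete, the sketch below performs a quantized matrix-vector multiply with integer accumulation and a floating-point rescale. The symmetric per-tensor quantization and the helper names are assumptions for illustration; they are not taken from the IMCE design.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, np.float32]:
    """Symmetric per-tensor quantization to INT8 with an FP32 scale (assumed scheme)."""
    scale = np.float32(max(np.abs(x).max() / 127.0, 1e-12))
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def mvm_int8(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """INT8 matrix-vector multiply with INT32 accumulation and FP32 output scaling."""
    qW, sW = quantize_int8(W)
    qx, sx = quantize_int8(x)
    acc = qW.astype(np.int32) @ qx.astype(np.int32)   # integer MACs, as a DSP array would do
    return acc.astype(np.float32) * (sW * sx)          # rescale the result to FP32

# The summary cites matrices up to 4096x512; that shape is still quick in NumPy.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 512)).astype(np.float32)
x = rng.standard_normal(512).astype(np.float32)
print("max abs error vs. FP32 reference:", np.abs(mvm_int8(W, x) - W @ x).max())
```

The split between integer MACs and a single floating-point rescale mirrors the usual motivation for this compute style: the wide, parallel arithmetic stays in cheap fixed-point hardware while the FP32 scale preserves dynamic range at the output.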