🤖 AI Summary
Researchers introduce a hybrid unary‑binary architecture for running multilayer perceptron (MLP) classifiers on printed electronics (PE) that eliminates both costly multipliers and the encoders typically needed for unary arithmetic. Exploiting PE's low fabrication and non‑recurring engineering (NRE) costs and tailoring the arithmetic to the medium, the design combines unary (bitstream) and binary processing to enable multiplier‑less inference and a simpler circuit layout. The authors also propose architecture‑aware training that adapts model parameters to the hardware's representational constraints, reducing the need for complex on‑chip conversion logic.
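To make the multiplier‑less idea concrete, here is a minimal software sketch, not the paper's actual circuits: it assumes activations arrive as unary pulse trains while weights stay binary, so each product reduces to gated accumulation with a plain adder. The function names, stream length, and thermometer encoding are illustrative assumptions.

```python
# Hedged sketch: a multiplier-less unary x binary product.
# Assumption (not the paper's exact circuit): the input activation is held
# as a unary pulse train (value = number of 1-pulses) while the weight
# stays binary; the product then becomes conditional accumulation.

def to_unary(value: int, length: int) -> list:
    """Thermometer-style unary encoding: `value` ones followed by zeros."""
    assert 0 <= value <= length
    return [1] * value + [0] * (length - value)

def unary_binary_mac(x_stream: list, weight: int, acc: int = 0) -> int:
    """Multiply-accumulate without a multiplier: add the binary weight
    once for every 1-pulse in the unary input stream."""
    for bit in x_stream:
        if bit:            # a single enable line / AND gate in hardware
            acc += weight  # plain binary adder, no multiplier array
    return acc

# 5 * 3 computed using only an adder:
assert unary_binary_mac(to_unary(5, 8), 3) == 15
```

Because the accumulation is driven directly by the pulse train, no unary‑to‑binary encoder is needed on the input path, which is where the area and power savings come from.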
On six datasets the approach yields average reductions of 46% in area and 39% in power versus prior PE MLP designs, with only minimal accuracy degradation, outperforming state‑of‑the‑art printed MLP implementations. Technically, the work shows that co‑designing the numeric representation, circuit topology, and training procedure can remove expensive arithmetic blocks (multipliers and encoders) on large‑feature‑size substrates, making practical, ultra‑low‑cost ML inference on flexible printed platforms more viable. The result matters for edge and ubiquitous sensing applications where cost, flexibility, and power dominate, and it suggests broader opportunities for hardware–algorithm co‑design on non‑silicon ML substrates.
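The summary does not detail the architecture‑aware training; one common way such training is realized is quantization‑aware training with a straight‑through estimator, sketched below under the assumption that weights must land on a coarse grid of hardware‑representable levels. The 4‑level grid, names, and single‑neuron squared‑error loss are illustrative, not the paper's scheme.

```python
import numpy as np

# Hedged sketch of architecture-aware training: weights are snapped to the
# few levels the printed circuit can represent during the forward pass,
# while gradients flow through the rounding as if it were identity
# (straight-through estimator). The 4-level grid is an assumption.

LEVELS = 4  # assumed number of representable weight levels

def quantize(w: np.ndarray) -> np.ndarray:
    """Snap weights onto the coarse grid the hardware can realize."""
    w_clipped = np.clip(w, 0.0, 1.0)
    return np.round(w_clipped * (LEVELS - 1)) / (LEVELS - 1)

def train_step(w, x, y, lr=0.1):
    """One SGD step on a single linear neuron with a quantized forward pass."""
    w_q = quantize(w)            # hardware-faithful forward pass
    y_hat = x @ w_q
    grad = 2 * (y_hat - y) * x   # dL/dw via the straight-through estimator
    return w - lr * grad

rng = np.random.default_rng(0)
w = rng.random(3)
x = np.array([0.2, 0.5, 0.9])
for _ in range(100):
    w = train_step(w, x, y=0.7)
print(quantize(w), x @ quantize(w))  # weights now sit on the hardware grid
```

Training against the quantized forward pass means the deployed model never depends on precision the printed substrate cannot provide, which is why on‑chip conversion logic can be trimmed.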