🤖 AI Summary
Researchers from USC and MIT published a Nature Photonics study describing a laser-array optical processor that could dramatically cut the energy and space costs of AI compute. The team demonstrated a photonic chip with more than 400 surface-emitting lasers (VCSELs) packed into 1 cm² that converts data from electronic memory directly into light at a 10 GHz clock, equivalent to 10 billion neural activations per second per channel. They report up to a 100× improvement in both throughput and energy efficiency over current ML processors, with per-conversion energy down to a few attojoules (roughly the energy of ten visible photons), some 5–6 orders of magnitude better than modern optical modulators. The authors say near-term engineering could yield roughly another two orders of magnitude of efficiency gain on top of that.
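As a quick plausibility check, here is a back-of-envelope sketch of those figures in Python. The wavelength, laser count, and per-modulator energy are assumed round numbers, not values from the paper.

```python
import math

# Back-of-envelope check of the headline numbers; the wavelength,
# laser count, and modulator energy below are assumed round values,
# not figures taken from the paper.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

wavelength = 500e-9                 # assume ~500 nm visible light
photon_energy = h * c / wavelength  # ~4.0e-19 J per photon
print(f"ten visible photons: {10 * photon_energy / 1e-18:.1f} aJ")  # ~4 aJ, i.e. 'a few attojoules'

n_lasers = 400    # 'more than 400' VCSELs on the chip
clock_hz = 10e9   # 10 GHz per-channel clock
print(f"aggregate rate: {n_lasers * clock_hz:.1e} activations/s")  # ~4e12/s across the array

# Compare against a typical electro-optic modulator at ~1 pJ/conversion (assumed figure)
modulator_energy_j = 1e-12
ratio = modulator_energy_j / (10 * photon_energy)
print(f"improvement: ~10^{math.log10(ratio):.1f}")  # ~10^5.4, i.e. 5-6 orders of magnitude
```

The numbers hang together: ten ~500 nm photons come to about 4 aJ, and 400 channels at 10 GHz give roughly 4 trillion activations per second in aggregate.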
This matters because large DNNs are hitting electronic and thermal bottlenecks: training GPT-4 reportedly used ~25,000 GPUs and on the order of 50 GWh of electricity. Previous optical neural networks suffered from poor electro-optic conversion efficiency, large device footprints, crosstalk, and latency due to the absence of inline nonlinearity; integrating dense laser arrays addresses many of those limits by increasing compute density and lowering conversion energy. If scalable, laser-array optoelectronic processors could accelerate ML workloads from data centers to edge devices, cutting carbon footprints and enabling new architectures for training and inference.
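A similar sketch for the training-energy scale, where the GPU power draw, run length, and overhead factor are all assumptions rather than figures from the article, lands in the tens of GWh, consistent with the scale cited above.

```python
# Sanity check on the training-energy scale; every input here is an
# assumption (typical public estimates), not a figure from the article.
n_gpus = 25_000
gpu_power_w = 400   # assumed average draw per GPU, watts
hours = 90 * 24     # assumed ~90-day training run
pue = 1.2           # assumed datacenter overhead (power usage effectiveness)

energy_gwh = n_gpus * gpu_power_w * hours * pue / 1e9  # watt-hours -> GWh
print(f"estimated training energy: ~{energy_gwh:.0f} GWh")  # ~26 GWh: tens of GWh, not MWh
```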