A unified model of memory and perception [pdf] (www.sissa.it)

🤖 AI Summary
Researchers at SISSA published a model in Neuron showing that two opposite perceptual biases, contraction (perception pulled toward recent stimuli) and repulsion (perception pushed away from them), can emerge from the same biologically plausible mechanism. Using a recurrent neural network with Hebbian plasticity ("cells that fire together wire together"), the team demonstrated that continuous, local synaptic updates create transient memory traces that reshape incoming sensory representations.

The single circuit reproduced behavioral data from humans and rodents across four datasets and three paradigms (working memory, reference memory, and a bespoke one-back task) without task-specific parameter tuning; the different biases arise simply from which aspect of the network's activity downstream readouts sample. The model's dynamics behave like a sliding attractor whose synaptic history subtly steers perception of new inputs.

For the AI/ML community this unification matters because it shows how simple, local learning rules and recurrent dynamics can yield flexible, context-dependent memory and perceptual behavior without modular architectures or explicit controllers. Practically, the work suggests biologically inspired design principles (Hebbian updates, sliding attractors, and readout-dependent computation) that could inform continual learning, robust representation, and low-supervision memory modules in artificial systems, and it provides an experimentally validated bridge between neuroscience mechanisms and scalable learning architectures.
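To make the mechanism concrete, here is a minimal NumPy sketch of the general idea: a ring of rate units whose recurrent weights receive decaying Hebbian updates, so a prior stimulus leaves a synaptic trace that pulls the decoded percept of the next stimulus toward it (the contraction bias). The network size, tuning curves, learning rate, decay constant, and population-vector readout are all illustrative assumptions, not the paper's implementation; the repulsion bias, which the paper attributes to a different readout of the same circuit, is not modeled here.

```python
import numpy as np

N = 200                                     # rate units with preferred angles on a ring
prefs = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def tuning(theta):
    # Bell-shaped population response to a circular stimulus at angle theta
    return np.exp(np.cos(prefs - theta) - 1.0)

def decode(rates):
    # Population-vector readout: angle of the rate-weighted sum of unit vectors
    return np.angle(np.sum(rates * np.exp(1j * prefs)))

W = np.zeros((N, N))    # recurrent weights carrying the transient memory trace
eta, decay = 0.5, 0.9   # Hebbian learning rate and trace decay (illustrative values)

def present(theta, steps=50, dt=0.1):
    """Relax the recurrent rate dynamics on a stimulus, then Hebbian-update W."""
    global W
    r = tuning(theta)
    for _ in range(steps):
        r = np.clip(r + dt * (-r + tuning(theta) + W @ r), 0.0, None)
    # Local, continuous Hebbian update with decay:
    # co-active units strengthen their connections while older traces fade
    W = decay * W + (eta / N) * np.outer(r, r)
    return decode(r)

present(np.pi / 2)                 # prior stimulus at 90 degrees leaves a trace
percept = present(np.pi / 4)       # probe at 45 degrees
print(f"decoded probe: {np.degrees(percept):.1f} deg (pulled toward 90)")
```

In this toy setting the script prints a decoded angle slightly above 45 degrees: the residual trace of the 90-degree stimulus biases the activity pattern the probe settles into, which is the sliding-attractor intuition described in the summary.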