🤖 AI Summary
This paper revisits Hebbian and anti-Hebbian learning through an AI lens: it systematically defines the mathematical update rules, activation functions, and practical implementations, then tests them on vision benchmarks (Fashion-MNIST, KMNIST, MNIST) under a set of explicit data-fusion protocols (s0–s7) covering sequential training, independent training followed by merging, and combined-dataset training. The key empirical findings are that Hebbian-style synapses can learn useful representations but are prone to overfitting (visible both in degraded performance and in characteristic synapse visualizations), that the choice of activation strongly shapes the learned features, and that data fusion via synaptic weight merging (governed by an explicit equation with a hybrid parameter) materially affects downstream performance, paralleling recent "model soup" observations.
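The summary does not quote the paper's merging equation; a minimal sketch, assuming a simple convex combination of two independently trained synapse matrices controlled by a hybrid parameter (the names `merge_synapses` and `lam` are illustrative, not the paper's):

```python
import numpy as np

def merge_synapses(W_a: np.ndarray, W_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Blend two independently trained synapse matrices.

    lam = 1.0 keeps only W_a, lam = 0.0 keeps only W_b; intermediate values
    interpolate the weights, analogous to "model soup" weight averaging.
    """
    assert W_a.shape == W_b.shape, "synapse matrices must have the same shape"
    return lam * W_a + (1.0 - lam) * W_b

# Example: fuse synapses trained separately on two unlabeled datasets.
rng = np.random.default_rng(0)
W_fashion = rng.normal(size=(784, 784))  # hidden size matches 28*28 input dim
W_kmnist = rng.normal(size=(784, 784))
W_merged = merge_synapses(W_fashion, W_kmnist, lam=0.5)
```

Sweeping `lam` from 0 to 1 is the natural way to probe how strongly the merge trades off performance between the two source datasets.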
Technically, the work ties classical biologically plausible rules (Hebb/anti-Hebb, winner-take-all motifs) to modern ML recipes. Experiments use a batch size of 100, a hidden layer sized to match the input dimensionality, tunable power and anti-Hebbian strength/index hyperparameters, numerical-stability thresholds, the Adam optimizer with cross-entropy loss in supervised comparisons, and learning-rate schedules (linear decay or cosine annealing from 0.01 → 0 over 200 epochs). The implication is that Hebbian schemes are viable as lightweight pretraining or representation-learning primitives, but they require careful regularization, activation choices, and principled merging strategies to avoid catastrophic specialization when leveraging multiple unlabeled datasets.
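The summary pins down only the endpoints of the two schedules (0.01 → 0 over 200 epochs); a minimal sketch of both options under that assumption (function names are illustrative):

```python
import math

LR0, EPOCHS = 0.01, 200  # initial learning rate and training length from the summary

def linear_lr(epoch: int) -> float:
    """Linear decay from LR0 at epoch 0 to 0 at epoch EPOCHS."""
    return LR0 * (1.0 - epoch / EPOCHS)

def cosine_lr(epoch: int) -> float:
    """Cosine annealing from LR0 at epoch 0 to 0 at epoch EPOCHS."""
    return 0.5 * LR0 * (1.0 + math.cos(math.pi * epoch / EPOCHS))

# Both schedules start at 0.01 and reach exactly 0 at epoch 200.
for e in (0, 100, 200):
    print(e, round(linear_lr(e), 4), round(cosine_lr(e), 4))
```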