Neural Network Visualisation (github.com)

🤖 AI Summary
A compact, interactive web visualiser lets you draw a digit on a 28×28 grid and watch activations propagate in 3D through a small multi-layer perceptron (MLP) trained on MNIST, with live prediction probabilities. The front end is built with Three.js and streams a timeline of weight snapshots, so you can scrub through training progress without downloading the whole run. The scene highlights the top-N strongest incoming connections per neuron and uses color to encode activation sign and magnitude, making the network's dynamics immediately intelligible for teaching, demos, or debugging.

Under the hood, training and export are handled by a PyTorch script (training/mlp_train.py) that auto-selects Metal (MPS) on Apple Silicon and falls back to CUDA or CPU; it supports options for hidden-layer sizes, epochs, batch size, and a --skip-train mode. Exports produce a small manifest (exports/mlp_weights.json) pointing to 35 checkpoints (dense early snapshots plus dataset-multiple milestones up to 50×), with weights stored as float16 files (exports/<stem>/NNN_<id>.json) that are streamed on demand.

To try it, run any static server (e.g. python3 -m http.server 8000), update VISUALIZER_CONFIG.weightUrl if needed, and open the page. The project is still rough and under active development (tablet input and museum-ready features are planned), but it is already useful for building hands-on intuition about training dynamics and internal representations.
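The top-N connection highlighting described above amounts to ranking a neuron's incoming weights by magnitude. A minimal sketch of that idea follows; the function name and signature are illustrative, not taken from the repository:

```python
import heapq

def top_n_incoming(weights_row, n=3):
    """Return the indices of the n incoming connections with the
    largest absolute weight for a single neuron (illustrative sketch)."""
    return heapq.nlargest(n, range(len(weights_row)),
                          key=lambda i: abs(weights_row[i]))

# Example: a neuron with four incoming weights; the two strongest
# by magnitude are indices 1 (-0.9) and 2 (0.5).
print(top_n_incoming([0.1, -0.9, 0.5, 0.2], n=2))  # → [1, 2]
```

Ranking by absolute value, rather than raw value, matches the visual convention of treating strongly negative and strongly positive weights as equally "strong" while letting color carry the sign.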
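Device auto-selection of the kind the training script performs is typically a short preference chain. This sketch factors the availability checks into arguments so the logic is testable; in a real PyTorch script those flags would come from torch.backends.mps.is_available() and torch.cuda.is_available(), and the helper name here is an assumption:

```python
def pick_device(mps_available: bool, cuda_available: bool) -> str:
    """Prefer Apple's Metal backend (MPS), then CUDA, then plain CPU."""
    if mps_available:
        return "mps"
    if cuda_available:
        return "cuda"
    return "cpu"

print(pick_device(True, False))   # → mps
print(pick_device(False, True))   # → cuda
print(pick_device(False, False))  # → cpu
```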
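JSON has no native float16 type, so "weights stored as float16" presumably means each value is rounded to half precision before being written as an ordinary JSON number, halving effective precision to keep the checkpoint files small. A stdlib sketch of that round-trip (the repository's actual encoding may differ):

```python
import struct

def round_to_float16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision ('e' format),
    mimicking float16 quantisation in a JSON weight export."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 0.1 is not exactly representable in half precision, so the
# stored value is close to, but not equal to, the original.
print(round_to_float16(0.1))
```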