🤖 AI Summary
EvoAttention is an open-source framework that uses evolutionary algorithms to automatically discover attention-mechanism architectures instead of assuming scaled dot-product + softmax is optimal. Running a 10-generation search (population 12, 5k training steps per individual, top-3 elitism, 9 offspring per generation) on 2-layer transformers for WikiText-2 language modeling produced a best model with perplexity 98.45 vs. the vanilla baseline's 102.90, a 4.3% improvement. The top discovered gene combines dot-product similarity, sparsemax normalization, a learned temperature, and output gating; nearby high performers used similar elements (multiplicative similarity or adaptive temperature).
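The search loop described above (population 12, top-3 elitism, 9 offspring per generation via crossover and mutation) might be sketched as follows. The component pools, function names, and fitness interface here are illustrative assumptions for exposition, not EvoAttention's actual API:

```python
import random

# Hypothetical component pools for the 4-part gene
# (similarity, normalization, gating, temperature); values are
# taken from the variants named in the summary.
SIMILARITIES   = ["dot", "multiplicative"]
NORMALIZATIONS = ["softmax", "sparsemax"]
GATINGS        = ["none", "output"]
TEMPERATURES   = ["fixed", "learned", "adaptive"]
POOLS = [SIMILARITIES, NORMALIZATIONS, GATINGS, TEMPERATURES]

def random_gene():
    return tuple(random.choice(pool) for pool in POOLS)

def crossover(a, b):
    # Uniform crossover: each of the 4 components comes from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(gene, rate=0.25):
    # With probability `rate`, resample a component from its pool.
    return tuple(random.choice(pool) if random.random() < rate else g
                 for g, pool in zip(gene, POOLS))

def evolve(fitness, generations=10, pop_size=12, elite=3):
    """fitness(gene) -> validation perplexity (lower is better)."""
    population = [random_gene() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        elites = ranked[:elite]                       # top-3 elitism
        offspring = [mutate(crossover(*random.sample(elites, 2)))
                     for _ in range(pop_size - elite)]  # 9 offspring
        population = elites + offspring
    return min(population, key=fitness)
```

In the real framework each fitness evaluation would train a 2-layer transformer for 5k steps and measure perplexity; a toy scoring function is enough to exercise the loop itself.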
Technically, attention mechanisms are encoded as 4-component genes (similarity, normalization, gating, temperature) and evolved via crossover/mutation. Consistent signals: sparsemax yields sparser, often better attention distributions than softmax; learned/adaptive temperature adds useful flexibility; output gating improves results across top candidates. Limitations are clear: results are at small scale (2-layer models, WikiText-2), show roughly ±1 perplexity variance across runs, and haven't been validated on large models or diverse datasets. The code (MIT) is reproducible on Colab (~3 hours) and highlights that automated search can reveal simple, practical attention variants worth evaluating and scaling in downstream AI/ML settings.
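Sparsemax (Martins &amp; Astudillo, 2016) is a drop-in replacement for softmax that projects the logits onto the probability simplex, so low-scoring entries receive exactly zero weight. A minimal NumPy sketch, independent of EvoAttention's codebase:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of logits z onto the probability simplex.

    Unlike softmax, the output can contain exact zeros, which is what
    makes the resulting attention distributions sparse.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # logits in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum      # entries that stay nonzero
    k_max = k[support][-1]                   # support size
    tau = (cumsum[k_max - 1] - 1) / k_max    # shared threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, 0.1]))  # → [1. 0. 0.]
```

For logits [2.0, 1.0, 0.1], softmax assigns nonzero mass to every position, while sparsemax concentrates all mass on the top entry; with tied logits such as [1.0, 1.0] it splits mass evenly.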