🤖 AI Summary
Researchers show that an image-based machine learning model can serve as a probe of regularity in the distribution of primes by training classifiers on small image “blocks” taken from Ulam spirals at different orders of magnitude. Models trained on regions near ~500 million outperform those trained on regions below ~25 million in raw accuracy, indicating that prime patterns at higher magnitudes are more learnable. A per-class breakdown (precision vs. recall) reveals a behavioral shift: at low integers the model concentrates on positively identifying primes, while at higher integers it more reliably rejects composites.
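The input representation is easy to picture. A minimal sketch of one such block, assuming the standard Ulam construction (integers walked outward in a square spiral, primes marked 1); the function names and block layout here are illustrative, not the paper's exact pipeline:

```python
# Build a small 0/1 image block cut from an Ulam spiral, 1 marking a prime.
# Illustrative only; the paper's preprocessing may differ.

def is_prime(n: int) -> bool:
    """Trial division; fine for small illustrative blocks."""
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def ulam_block(size: int, start: int = 1) -> list[list[int]]:
    """size x size grid of 0/1 values: the Ulam spiral beginning at
    `start` in the center, walking outward in a square spiral."""
    grid = [[0] * size for _ in range(size)]
    x = y = size // 2          # center of the block
    dx, dy = 1, 0              # first step moves right
    n, run, turns = start, 1, 0
    while run <= size:         # arms longer than `size` fall outside the block
        for _ in range(run):
            if 0 <= x < size and 0 <= y < size:
                grid[y][x] = int(is_prime(n))
            n += 1
            x, y = x + dx, y + dy
        dx, dy = -dy, dx       # 90-degree turn
        turns += 1
        if turns % 2 == 0:     # run length grows every two turns
            run += 1
    return grid

block = ulam_block(5, start=1)                # covers the integers 1..25
print(sum(v for row in block for v in row))   # 9 primes are <= 25
```

Training data in this style would pair many such blocks (started at integers drawn from a given magnitude range) with prime/composite labels for a target cell.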
The result is significant because it ties an empirical, ML-derived notion of “learnability” to classical number-theoretic expectations: that average statistical structure (prime density, equidistribution across arithmetic progressions) dominates, and that local fluctuations become more regular after rescaling by log x as x grows. Practically, the work presents ML as a new experimental instrument for exploring prime-distribution phenomena and suggests a route to probing structure in strong vs. weak primes, with consequences for cryptography. Key technical takeaways are the use of Ulam-spiral image blocks as inputs, comparative accuracy and precision/recall analyses across magnitude regimes, and the interpretation that higher-magnitude prime fields exhibit more easily extractable, model-detectable order.
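The precision-vs-recall shift described above can be made concrete with the standard per-class definitions. A toy sketch with hypothetical confusion counts (these numbers are not from the paper; they only illustrate the direction of the shift):

```python
# Precision: of blocks flagged prime, how many truly were prime?
# Recall: of truly prime blocks, how many were flagged?

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for 100 true-prime blocks in each regime:
low_p, low_r = precision_recall(tp=80, fp=40, fn=20)    # low magnitudes
high_p, high_r = precision_recall(tp=70, fp=10, fn=30)  # high magnitudes

print(f"low:  precision={low_p:.2f} recall={low_r:.2f}")   # chases primes
print(f"high: precision={high_p:.2f} recall={high_r:.2f}") # rejects composites
```

Under these toy counts the low-magnitude model shows higher recall (it catches more primes at the cost of false positives), while the high-magnitude model shows higher precision (it makes fewer false prime calls), matching the behavioral shift the summary reports.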