TempoPFN: Synthetic Pre-Training of Linear RNNs for Zero-Shot Time Series Forecasting (arxiv.org)

🤖 AI Summary
TempoPFN is a new univariate time-series foundation model built from linear RNNs and pre-trained entirely on synthetic data, introduced to tackle long-horizon zero-shot forecasting and reproducibility problems. Its core architectural novelty is a GatedDeltaProduct design with “state-weaving,” which lets the model maintain temporal state while training and inferring fully in parallel across sequence lengths—removing the need for sliding windows or heavy summarization. The authors pair this with a comprehensive synthetic data pipeline that mixes stochastic differential equation generators, Gaussian processes, audio synthesis, and bespoke augmentations to expose the model to diverse temporal dynamics.

On the Gift-Eval zero-shot benchmark, TempoPFN outperforms every prior synthetic-only method and beats most models trained on real-world data, while being computationally cheaper thanks to parallelizable training and inference. The result suggests that carefully designed synthetic curricula plus lightweight linear RNNs can yield robust, reproducible zero-shot forecasters without massive real-data pretraining. The team open-sourced the full data-generation and training code, making it easier for researchers to reproduce results and iterate—an important step toward scalable, efficient foundation models for time series that reduce dependence on proprietary datasets.
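To make the "synthetic curriculum" idea concrete, here is a minimal sketch of how such a pipeline might mix generators: an Ornstein-Uhlenbeck SDE simulated with Euler-Maruyama, a Gaussian-process sample, and a random-trend augmentation. This is an illustrative assumption, not the authors' actual code—their pipeline also includes audio synthesis and many more augmentations; all function names here are hypothetical.

```python
import numpy as np

def ou_series(n_steps, theta=0.7, mu=0.0, sigma=0.3, dt=0.01, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck SDE:
    dX = theta * (mu - X) dt + sigma dW.
    One example of a stochastic-differential-equation generator."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = mu
    for t in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dw
    return x

def gp_series(n_steps, length_scale=20.0, seed=1):
    """Sample a Gaussian process with an RBF kernel by drawing from
    the multivariate normal defined by the kernel matrix."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps, dtype=float)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2)
    K += 1e-6 * np.eye(n_steps)  # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(n_steps), K)

def synthetic_mixture(n_steps, seed=2):
    """Mix generators and apply a simple augmentation (random linear
    trend), mimicking the spirit of a diverse synthetic curriculum."""
    rng = np.random.default_rng(seed)
    base = ou_series(n_steps, seed=seed) + 0.5 * gp_series(n_steps, seed=seed + 1)
    trend = rng.uniform(-0.01, 0.01) * np.arange(n_steps)
    return base + trend

series = synthetic_mixture(256)
print(series.shape)
```

A real pretraining loop would draw millions of such series with randomized generator parameters, so the model never sees the same dynamics twice.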