Tiny-TSM: Efficiently Training a Lightweight SOTA Time Series Foundation Model (arxiv.org)

🤖 AI Summary
Researchers have introduced Tiny-TSM, a lightweight time series foundation model that achieves state-of-the-art (SOTA) forecasting performance with just 23 million parameters. It was trained on a single A100 GPU in under a week using SynthTS, a synthetic data generation and augmentation pipeline. Tiny-TSM outperforms larger models on medium- and long-term forecasting tasks and delivers competitive short-term accuracy, without complex hyperparameter tuning or neural architecture search. That practicality in resource-constrained environments makes advanced time series modeling accessible to a broader audience.

The paper also introduces a causal input normalization scheme that allows the model to be trained with a next-token prediction loss, which accelerates convergence and shortens training. Together, these results challenge assumptions about the model size and complexity a foundation model requires, and they widen the range of sectors where time series foundation models are practical.
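The summary does not detail the normalization scheme, but the core idea of causal normalization can be sketched. A minimal illustration, assuming each step is standardized with running statistics over its own prefix (the paper's exact formulation may differ):

```python
import numpy as np

def causal_normalize(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Standardize each step using mean/std of the prefix x[0..t] only.

    Hypothetical sketch: no future value influences position t, so a
    decoder-style model can train with a next-token prediction loss at
    every position without leaking information from later time steps.
    """
    n = np.arange(1, len(x) + 1)
    mu = np.cumsum(x) / n                   # running mean over x[:t+1]
    var = np.cumsum(x**2) / n - mu**2       # running variance (clamped below)
    return (x - mu) / (np.sqrt(np.maximum(var, 0.0)) + eps)
```

One plausible motivation: standard instance normalization uses whole-series statistics, which injects future information into earlier positions; a causal scheme avoids that, letting every position serve as a valid training target.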
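SynthTS itself is only named here, not described. As a generic, hypothetical illustration of what synthetic generation plus augmentation for time series can look like (trend + seasonality + noise, followed by random scaling and jitter), and not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_series(length: int = 512) -> np.ndarray:
    """Generate one synthetic series as trend + seasonality + noise.

    Generic illustration only; SynthTS's real mechanics are not
    described in the summary and may differ substantially.
    """
    t = np.arange(length, dtype=float)
    trend = rng.normal(0.0, 0.01) * t
    season = rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * t / rng.integers(12, 96))
    noise = rng.normal(0.0, 0.1, size=length)
    return trend + season + noise

def augment(x: np.ndarray) -> np.ndarray:
    """Simple augmentations: random amplitude scaling plus jitter."""
    return x * rng.uniform(0.8, 1.2) + rng.normal(0.0, 0.05, size=x.shape)
```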