# TimeOmni-1-7B: Generalized Time Series Reasoning Model
We present TimeOmni-1, the first generalized, unified model for time series reasoning. It first injects temporal priors through supervised fine-tuning; reinforcement learning with task-grounded rewards then guides the model beyond mimicking those priors toward robust reasoning. Experiments show that TimeOmni-1 achieves top-tier performance while preserving the general reasoning ability of the base model. Finally, we demonstrate that joint training across diverse reasoning tasks yields mutual gains, supporting a "train-once, use-across-tasks" paradigm for future time series reasoning models.
## Task Illustration
## Method
TimeOmni-1 is a generalized reasoning model for time series. Pretrained LLMs often lack temporal priors because they are rarely exposed to time series during pretraining. To address this, we use a two-stage training pipeline: (1) supervised fine-tuning (SFT) to inject temporal priors and anchor the model in a temporal knowledge space, and (2) reinforcement learning (RL) with task-grounded rewards (see Reward Evaluation in the figure above) to improve robustness and reasoning quality.
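The task-grounded reward can be illustrated with a minimal sketch. Note this is our own illustration of the idea, not the paper's exact formula: classification-style tasks earn a 0/1 correctness reward, forecasting-style tasks earn a reward that decays with MAE, and an unparsable response always earns zero.

```python
def task_reward(task, prediction, target, valid_format=True):
    """Illustrative task-grounded reward (assumed shape, not the exact formula).

    Classification-style tasks -> 0/1 correctness;
    forecasting-style tasks   -> reward decaying with MAE.
    An invalid/unparsable response always earns 0.
    """
    if not valid_format:  # unparsable output gets no reward
        return 0.0
    if task == "forecast":
        mae = sum(abs(p - t) for p, t in zip(prediction, target)) / len(target)
        return 1.0 / (1.0 + mae)  # in (0, 1], higher when MAE is lower
    return 1.0 if prediction == target else 0.0
```

Rewarding only well-formatted, task-correct outputs is what pushes the model past pattern imitation toward answers that are both parseable and grounded in the series.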
## Benchmarks
Note: All metrics below are computed only on valid responses. "–" indicates a success rate (SR) below 10%; in such cases the metric is omitted because too few valid responses remain for it to be statistically meaningful.
| Model | Task1 ID (ACC↑/SR) | Task1 OOD (ACC↑/SR) | Task2 ID (ACC↑/SR) | Task2 OOD (ACC↑/SR) | Task3 ID (MAE↓/SR) | Task3 OOD (MAE↓/SR) | Task4 ID (ACC↑/SR) | Task4 OOD (ACC↑/SR) |
|---|---|---|---|---|---|---|---|---|
| Time Series Language Model | ||||||||
| Time-MQA Llama3-8B | 32.2/29.5 | 25.1/32.6 | 30.1/44.3 | 31.2/37.2 | -/1.4 | -/0.4 | 12.0/13.3 | 11.6/15.8 |
| Time-MQA Mistral-7B-v0.3 | 15.1/21.5 | 27.8/22.1 | 8.4/50.2 | 4.0/52.2 | -/0.2 | -/0.0 | 5.4/36.1 | 10.0/47.3 |
| Time-MQA Qwen2.5-7B | 25.0/14.0 | 37.5/22.7 | 29.5/33.0 | 30.5/32.0 | 19.76/12.2 | -/6.5 | 23.8/58.0 | 26.4/44.3 |
| ChatTS | -/6.0 | -/6.9 | 18.2/30.1 | 18.6/26.7 | -/0.0 | -/0.0 | 5.8/27.1 | 11.1/27.1 |
| ChatTime-7B-Chat | 18.2/11.0 | 29.8/12.7 | -/- | -/- | 14.47/100.0 | 154.55/100.0 | -/0.0 | -/0.0 |
| ITFormer-7B | 43.8/100.0 | 47.5/100.0 | 15.0/47.0 | 14.6/42.0 | 29.55/96.0 | 230.04/100.0 | 25.0/100.0 | 41.7/100.0 |
| OpenTSLM-llama-3.2-3b-ecg-flamingo | -/5.0 | -/3.2 | 1.6/23.0 | 3.3/26.5 | -/0.2 | -/0.0 | 17.8/98.4 | 16.2/98.9 |
| Time Series Reasoning Model | ||||||||
| Time-R1 | 30.9/94.0 | 34.0/92.5 | 30.2/53.8 | 31.4/48.9 | 17.61/38.7 | -/6.3 | 27.8/95.7 | 32.2/93.1 |
| Ours | ||||||||
| TimeOmni-1 | 90.7/97.5 | 87.7/98.3 | 69.3/99.8 | 64.0/99.8 | 14.30/93.8 | 145.53/82.3 | 47.9/100.0 | 58.9/100.0 |
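The table's ACC/SR convention (accuracy over valid responses only, success rate over all responses) can be sketched as follows. The `parse` argument stands in for a hypothetical answer extractor; the scoring logic itself is just the convention stated in the note above, not code from this repository.

```python
def score(responses, targets, parse):
    """Return (ACC, SR) as percentages, matching the table's convention.

    SR  = fraction of responses that parse to a valid answer.
    ACC = accuracy computed only over those valid responses
          (None when nothing parses).
    """
    parsed = [(parse(r), t) for r, t in zip(responses, targets)]
    valid = [(p, t) for p, t in parsed if p is not None]
    sr = 100.0 * len(valid) / len(responses)
    acc = 100.0 * sum(p == t for p, t in valid) / len(valid) if valid else None
    return acc, sr
```

Because ACC is conditioned on validity, a model with a low SR can post a deceptively high ACC; reading the two numbers together is what makes the comparison fair.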
## Usage
This repository hosts the model weights for TimeOmni-1. For installation, usage instructions, and further documentation, please visit our GitHub repository.
## License
TimeOmni-1 is released under the Apache 2.0 license. It is fine-tuned from Qwen2.5-7B-Instruct, which is also licensed under Apache 2.0.
## Citation
```bibtex
@inproceedings{guan2026timeomni,
  title={TimeOmni-1: Incentivizing Complex Reasoning with Time Series in Large Language Models},
  author={Tong Guan and Zijie Meng and Dianqi Li and Shiyu Wang and Chao-Han Huck Yang and Qingsong Wen and Zuozhu Liu and Sabato Marco Siniscalchi and Ming Jin and Shirui Pan},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=kOIclg7muL}
}
```