Instructions to use ulab-ai/Time-R1-Theta1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ulab-ai/Time-R1-Theta1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ulab-ai/Time-R1-Theta1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ulab-ai/Time-R1-Theta1")
model = AutoModelForCausalLM.from_pretrained("ulab-ai/Time-R1-Theta1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ulab-ai/Time-R1-Theta1 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ulab-ai/Time-R1-Theta1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ulab-ai/Time-R1-Theta1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
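The same call can be made from Python with only the standard library, since the server speaks the OpenAI-compatible chat-completions protocol. A minimal sketch — the helper names `chat_request` and `post_chat` are illustrative, and the vLLM server from the step above must be running before `post_chat` is actually called:

```python
import json
import urllib.request

def chat_request(model: str, prompt: str) -> dict:
    # Build a request body for the OpenAI-compatible
    # /v1/chat/completions endpoint.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def post_chat(body: dict, url: str = "http://localhost:8000/v1/chat/completions") -> dict:
    # POST the JSON body to the running server and return the parsed reply.
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

body = chat_request("ulab-ai/Time-R1-Theta1", "What is the capital of France?")
print(json.dumps(body, indent=2))
# With the server running:
#   reply = post_chat(body)
#   print(reply["choices"][0]["message"]["content"])
```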
- SGLang
How to use ulab-ai/Time-R1-Theta1 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ulab-ai/Time-R1-Theta1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ulab-ai/Time-R1-Theta1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ulab-ai/Time-R1-Theta1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ulab-ai/Time-R1-Theta1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use ulab-ai/Time-R1-Theta1 with Docker Model Runner:
```shell
docker model run hf.co/ulab-ai/Time-R1-Theta1
```
Time-R1 Model Series
This collection hosts the official checkpoints for the Time-R1 model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B-parameter large language model trained with a novel three-stage reinforcement learning curriculum that endows it with comprehensive temporal abilities: understanding, prediction, and creative generation.
These models are trained using the Time-Bench dataset.
Model Checkpoints
We provide several checkpoints representing different stages of the Time-R1 training process:
Stage 1: Temporal Comprehension Models
These models are trained to develop foundational temporal understanding.
- Time-R1-S1P1: Checkpoint after Phase 1 of Stage 1 training.
- Focus: Foundational logic on easy timestamp inference tasks.
- Time-R1-S1P2: Checkpoint after Phase 2 of Stage 1 training.
- Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.
- Time-R1-Theta1: Checkpoint θ₁, after Phase 3 (full Stage 1 training).
- Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.
- Time-R1-Theta1_prime: Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
- Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.
Stage 2: Future Event Time Prediction Model
This model builds upon Stage 1 capabilities to predict future event timings.
- Time-R1-Theta2: Checkpoint θ₂, after Stage 2 training.
- Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.
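When scripting across several of the checkpoints above, it can help to keep their Hub repo ids in one place. A small illustrative helper — the short keys are made up here; the repo ids are the ones listed above:

```python
# Hub repo ids for the checkpoints listed above (short keys are illustrative).
CHECKPOINTS = {
    "s1p1": "ulab-ai/Time-R1-S1P1",                  # Stage 1, Phase 1
    "s1p2": "ulab-ai/Time-R1-S1P2",                  # Stage 1, Phase 2
    "theta1": "ulab-ai/Time-R1-Theta1",              # θ₁, full Stage 1
    "theta1_prime": "ulab-ai/Time-R1-Theta1_prime",  # θ₁' ablation (no dynamic reward)
    "theta2": "ulab-ai/Time-R1-Theta2",              # θ₂, Stage 2
}

def repo_id(key: str) -> str:
    """Return the Hugging Face repo id for a short checkpoint key."""
    return CHECKPOINTS[key]

print(repo_id("theta2"))  # ulab-ai/Time-R1-Theta2
```

Each repo id can then be passed directly to `AutoModelForCausalLM.from_pretrained` as shown in the "How to Use" section.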
Please refer to the main paper for detailed discussions on the architecture, training methodology, and comprehensive evaluations.
How to Use
For loading and using these models, please refer to the example scripts and documentation provided in our GitHub repository.
Typically, you can load the models using the Hugging Face transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta1"  # Or your specific Hugging Face model path

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Further usage instructions would go here or in the repository
```
Citations
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```