Model Details

Model Description

This is a 32B reasoning model fine-tuned from Qwen2.5-32B-Instruct on 17K training examples. Its performance is on par with o1-preview on both math and coding benchmarks. Please see our blog post for more details.

  • Developed by: NovaSky Team from Sky Computing Lab at UC Berkeley.
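
The model follows the standard Qwen2.5 chat interface. Below is a minimal inference sketch using the Hugging Face transformers library; the prompt and generation settings are illustrative, not official recommendations.

```python
# Minimal inference sketch (illustrative prompt and generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the released weights are FP16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```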

Training Details

Training Data

17K verified-correct responses generated by Qwen/QwQ-32B-Preview on coding and math problems. In addition, we include the science portion from the Still-2 paper.
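
As a rough illustration of what "verified correct" means here, the sketch below filters generated responses by checking them against reference answers. The helper functions are hypothetical placeholders for whatever generation and verification tooling is actually used (e.g., answer matching for math, test execution for code).

```python
# Hypothetical sketch of verification-based filtering; `generate_with_qwq`
# and `is_verified_correct` are placeholder helpers, not part of any release.
def build_verified_dataset(problems):
    kept = []
    for problem in problems:
        response = generate_with_qwq(problem["prompt"])        # sample from QwQ-32B-Preview
        if is_verified_correct(response, problem["answer"]):   # e.g. answer match or unit tests
            kept.append({"prompt": problem["prompt"], "response": response})
    return kept
```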

Training Procedure

We perform supervised fine-tuning on this data with a batch size of 96.
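
For illustration, the effective batch size of 96 could be expressed with Hugging Face TrainingArguments as follows; the per-device/accumulation split, learning rate, and epoch count are assumptions, not the team's actual recipe (the real run uses Llama-Factory, as noted below).

```python
# Illustrative SFT hyperparameter sketch, not the actual Llama-Factory recipe.
# 8 GPUs x per-device batch 4 x gradient accumulation 3 = effective batch size 96 (assumed split).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sky-t1-sft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=3,
    learning_rate=1e-5,        # assumed for illustration
    num_train_epochs=3,        # assumed for illustration
    logging_steps=10,
)
```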

Speeds

We use Llama-Factory for training. On 8 H100 GPUs, training takes 19 hours with DeepSpeed ZeRO-3 Offload.
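
A generic ZeRO-3 offload configuration of the kind used with the Hugging Face/DeepSpeed integration is sketched below; the exact settings of this run are not specified here, so treat the values as assumptions.

```python
# Generic DeepSpeed ZeRO-3 offload config sketch (values are assumptions),
# e.g. passed via TrainingArguments(deepspeed=ds_config).
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},   # offload optimizer states to CPU
        "offload_param": {"device": "cpu"},       # offload parameters to CPU
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
```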

Evaluation

| Benchmark | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ | o1-preview |
|---|---|---|---|---|
| Math500 | 82.4 | 76.2 | 85.4 | 81.4 |
| AIME2024 | 43.3 | 16.7 | 50.0 | 40.0 |
| LiveCodeBench-Easy | 86.3 | 84.6 | 90.7 | 92.9 |
| LiveCodeBench-Medium | 56.8 | 40.8 | 56.3 | 54.9 |
| LiveCodeBench-Hard | 17.9 | 9.8 | 17.1 | 16.3 |
| GPQA-Diamond | 56.8 | 45.5 | 52.5 | 75.2 |

Acknowledgement

We would like to thank Lambda Lab and AnyScale for compute resources, and the Still-2 Team and Junyang Lin from the Qwen Team for academic feedback and support.

Citation

Please consider citing our blog post if you find it useful for your research. Thank you!

@misc{sky_t1_2025,
  author       = {NovaSky Team},
  title        = {Sky-T1: Fully open-source reasoning model with o1-preview performance in \$450 budget},
  howpublished = {https://novasky-ai.github.io/posts/sky-t1},
  note         = {Accessed: 2025-01-09},
  year         = {2025}
}