NovaSky-AI/Sky-T1-32B-Preview

32.8B parameters · FP8 · 131,072-token context · License: apache-2.0
Overview

Sky-T1-32B-Preview: A Specialized Reasoning Model

Sky-T1-32B-Preview is a 32.8 billion parameter language model developed by the NovaSky Team at the Sky Computing Lab, UC Berkeley. It is fine-tuned from Qwen2.5-32B-Instruct and optimized for advanced reasoning in mathematics and coding.
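
Since the model is distributed in the standard Hugging Face format, it can be loaded with the transformers library. A minimal inference sketch follows; the prompt and generation settings are illustrative, not from this card:

```python
# Minimal inference sketch with transformers; prompt and max_new_tokens
# are illustrative choices, not recommendations from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Prove that the sum of two even integers is even."}
]
# Build the chat-formatted input using the tokenizer's built-in template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```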

Key Capabilities & Performance

This model performs strongly across benchmarks, rivaling or exceeding its base model and competing models such as o1-preview in specific domains:

  • Mathematical Reasoning: Achieves 82.4 on Math500 and 43.3 on AIME2024, indicating robust problem-solving skills.
  • Code Generation & Understanding: Scores 86.3 on LiveCodeBench-Easy, 56.8 on LiveCodeBench-Medium, and 17.9 on LiveCodeBench-Hard, showcasing proficiency across different coding difficulty levels.
  • Specialized Training: Fine-tuned on 17K verified-correct responses generated by Qwen/QwQ-32B-Preview on coding and math problems, supplemented with science data from the STILL-2 paper (a sketch of this verification step follows this list).
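
The card does not spell out how responses were verified. Purely as an illustration, a filter that keeps only generations whose final boxed answer matches a reference might look like the following; the record format and the parse_final_answer helper are hypothetical, not the NovaSky team's actual pipeline:

```python
# Hypothetical sketch of filtering generations down to verified-correct
# responses; the data format and answer-extraction rule are assumptions.
import re

def parse_final_answer(text: str) -> str | None:
    """Extract the content of the last \\boxed{...} in a solution, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None

def keep_verified(samples: list[dict]) -> list[dict]:
    """Keep only samples whose extracted answer matches the reference."""
    verified = []
    for s in samples:
        answer = parse_final_answer(s["generation"])
        if answer is not None and answer == s["reference_answer"].strip():
            verified.append(s)
    return verified

# Toy usage: only the first sample survives the filter.
samples = [
    {"generation": "... so the result is \\boxed{42}", "reference_answer": "42"},
    {"generation": "... hence \\boxed{41}", "reference_answer": "42"},
]
print(len(keep_verified(samples)))  # 1
```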

Training Details

The model was supervised fine-tuned with Llama-Factory at a global batch size of 96. Training took 19 hours on 8 H100 GPUs with DeepSpeed ZeRO-3 Offload, an efficient setup for a model of this size.
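
ZeRO-3 Offload is typically specified through a DeepSpeed JSON config passed to the training launcher. A minimal sketch of such a config, written as a Python dict, is shown below; only the global batch size of 96 and the 8-GPU count come from this card, while the micro-batch/accumulation split is an assumption:

```python
# DeepSpeed ZeRO-3 Offload config consistent with the run described above.
# The global batch size (96) and world size (8 H100s) come from the card;
# the per-GPU micro-batch and accumulation values are assumptions.
import json

ds_config = {
    "zero_optimization": {
        "stage": 3,  # ZeRO stage 3: partition params, grads, and optimizer state
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": 12,  # assumed: 12 x 1 x 8 GPUs = 96
    "gradient_accumulation_steps": 1,      # assumed
    "train_batch_size": 96,                # global batch size from the card
}

# Write the config to disk and point the training launcher at it.
print(json.dumps(ds_config, indent=2))
```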

Good For

  • Complex Mathematical Problem Solving: Ideal for applications requiring high accuracy in mathematical reasoning.
  • Code Generation and Debugging: Suitable for developers needing assistance with coding tasks, from easy to hard.
  • Research in Reasoning Models: Provides a strong open-source baseline for further research and development in specialized reasoning AI.