Skywork/Skywork-OR1-Math-7B

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Apr 12, 2025 · Architecture: Transformer

Skywork-OR1-Math-7B is a 7.6 billion parameter model from the Skywork-OR1 series, developed by Skywork. It is specifically optimized for mathematical reasoning tasks, achieving scores of 69.8 on AIME24 and 52.3 on AIME25. This model utilizes large-scale rule-based reinforcement learning with carefully designed datasets and training recipes, making it suitable for advanced mathematical problem-solving.


Skywork-OR1-Math-7B: Specialized Mathematical Reasoning

Skywork-OR1-Math-7B is a 7.6 billion parameter model from the Skywork-OR1 series, developed by Skywork. It is specifically engineered for advanced mathematical reasoning, distinguishing itself from general-purpose language models through its specialized training methodology.

Key Capabilities

  • Exceptional Mathematical Reasoning: Achieves scores of 69.8 on AIME24 and 52.3 on AIME25, outperforming other models of similar size on these benchmarks.
  • Reinforcement Learning Training: Developed using large-scale rule-based reinforcement learning (RL) with meticulously curated datasets and a multi-stage training pipeline.
  • Data-driven Optimization: Incorporates model-aware difficulty estimation, offline and online difficulty-based filtering, and rejection sampling to enhance training efficiency and effectiveness.
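
The difficulty-based filtering described above can be illustrated with a toy sketch. This is not Skywork's implementation; the function names (`estimate_pass_rate`, `filter_by_difficulty`) and the pass-rate band are illustrative assumptions. The idea is that problems the model always solves or never solves carry little training signal, so filtering keeps the mid-difficulty band.

```python
def estimate_pass_rate(solve_fn, problem, n_samples=8):
    """Estimate the model's pass rate on a problem by sampling
    n_samples attempts (solve_fn is a stand-in for model inference
    plus a rule-based correctness check)."""
    return sum(solve_fn(problem) for _ in range(n_samples)) / n_samples

def filter_by_difficulty(problems, solve_fn, low=0.1, high=0.9, n_samples=8):
    """Keep problems the model sometimes, but not always, solves.
    Pass rates near 0 (too hard) or 1 (too easy) are filtered out,
    since they contribute little gradient signal in rule-based RL."""
    kept = []
    for problem in problems:
        rate = estimate_pass_rate(solve_fn, problem, n_samples)
        if low <= rate <= high:
            kept.append(problem)
    return kept
```

In practice the same filter can be rerun online during training as the model's pass rates shift, which matches the offline-plus-online filtering mentioned above.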

Good For

  • Complex Math Problem Solving: Ideal for applications requiring high accuracy in mathematical reasoning and problem-solving.
  • Research in Reasoning Models: Provides a strong baseline and open-source resources (data, code, blog) for further research into open reasoning models.
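
For evaluation use cases like the AIME benchmarks cited above, reasoning models conventionally report the final answer inside a LaTeX `\boxed{...}` expression. A small helper like the following (the name `extract_boxed_answer` is ours, and the convention is an assumption about the output format rather than a documented property of this model) can pull that answer out for rule-based scoring:

```python
def extract_boxed_answer(text):
    """Return the contents of the last \\boxed{...} in a model response,
    handling nested braces; returns None if no boxed answer is found."""
    marker = r"\boxed{"
    start = text.rfind(marker)
    if start == -1:
        return None
    i = start + len(marker)
    depth = 1  # we are inside the opening brace of \boxed{
    out = []
    while i < len(text):
        c = text[i]
        if c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:
                return "".join(out)
        out.append(c)
        i += 1
    return None  # unbalanced braces: no complete boxed answer
```

Since AIME answers are integers from 0 to 999, the extracted string can then be compared to the reference answer after stripping whitespace.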