DeepSeek-R1-Distill-Qwen-7B: Reasoning Capabilities in a Compact Model
This model is a 7.6-billion-parameter distilled version of DeepSeek AI's DeepSeek-R1, built on the Qwen2.5-Math-7B base. DeepSeek-R1 is a first-generation reasoning model trained with large-scale reinforcement learning (RL); its RL-only precursor, DeepSeek-R1-Zero, developed advanced reasoning behaviors such as self-verification and reflection without supervised fine-tuning (SFT) as a preliminary step.
Key Distillation Approach
DeepSeek AI's research shows that complex reasoning patterns from larger models can be effectively transferred to smaller ones. This 7B model is fine-tuned using high-quality reasoning data generated by the powerful DeepSeek-R1, allowing it to inherit sophisticated problem-solving abilities. This approach aims to provide strong reasoning performance in a more accessible and efficient package.
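In concrete terms, this kind of distillation amounts to standard supervised fine-tuning on reasoning traces sampled from the teacher. The sketch below is illustrative only, not DeepSeek's actual pipeline: the field names and the `<think>` trace formatting are assumptions modeled on the R1 output style.

```python
def make_sft_example(question: str, teacher_reasoning: str, teacher_answer: str) -> dict:
    """Package one teacher-generated sample as a prompt/completion pair for SFT.

    Illustrative format: the student learns to reproduce the teacher's
    full reasoning trace, not just the final answer.
    """
    return {
        "prompt": question,
        "completion": f"<think>\n{teacher_reasoning}\n</think>\n{teacher_answer}",
    }

example = make_sft_example(
    "What is 12 * 13?",
    "12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156.",
    "156",
)
```

Training the smaller model on many such pairs is what lets it inherit the teacher's step-by-step problem-solving behavior rather than only its answers.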
Performance Highlights
Evaluations indicate that DeepSeek-R1-Distill-Qwen-7B performs well across a range of benchmarks, particularly in:
- Mathematics: Achieving 55.5 pass@1 on AIME 2024 and 92.8 pass@1 on MATH-500.
- Code: Scoring 37.6 pass@1 on LiveCodeBench and a CodeForces rating of 1189.
- General Reasoning: Demonstrating competitive results on GPQA Diamond.
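The pass@1 figures above report the probability that a single sampled solution is correct. One common way to estimate pass@k from repeated sampling is the unbiased estimator popularized by the Codex paper; a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples per problem, c of them correct.

    Returns the probability that at least one of k samples drawn
    without replacement from the n is correct.
    """
    if n - c < k:
        return 1.0  # too few failures for a draw of k to miss every correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 16 samples drawn for one problem, 9 of them correct
print(pass_at_k(16, 9, 1))  # 0.5625, i.e. 9/16
```

For k = 1 the estimator reduces to the fraction of correct samples, c/n, averaged over problems.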
Usage Recommendations
To achieve optimal performance, DeepSeek AI recommends specific configurations:
- Set the temperature between 0.5 and 0.7 (0.6 is recommended) to prevent endless repetition or incoherent output.
- Avoid system prompts; include all instructions in the user prompt.
- For math problems, include a directive like "Please reason step by step, and put your final answer within \boxed{}".
- Enforce the model to start its response with "<think>\n" to prevent it from bypassing the thinking pattern and to ensure thorough reasoning.
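The recommendations above can be folded into a small request builder. This is a hedged sketch: the request shape loosely follows an OpenAI-style chat API, and the `response_prefix` field is a hypothetical name for whatever prefix-forcing mechanism your serving stack exposes.

```python
def build_request(question: str, is_math: bool = False) -> dict:
    """Assemble a generation request following the usage recommendations above.

    Field names are illustrative, not a specific SDK's signature.
    """
    if is_math:
        # Math directive recommended by DeepSeek AI
        question += "\nPlease reason step by step, and put your final answer within \\boxed{}."
    return {
        # No system message: all instructions go in the user prompt.
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.6,            # within the recommended 0.5-0.7 range
        "response_prefix": "<think>\n",  # hypothetical field: force the thinking pattern
    }

req = build_request("Solve x^2 - 5x + 6 = 0.", is_math=True)
```

With this shape, the user turn carries both the problem and the step-by-step directive, and no system role is used at all.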