DeepSeek-R1-Distill-Qwen-7B: Reasoning Distilled
DeepSeek-R1-Distill-Qwen-7B is a 7.6-billion-parameter model from DeepSeek AI, part of their DeepSeek-R1 series. This model is a distilled version of the larger DeepSeek-R1, a first-generation reasoning model trained via large-scale reinforcement learning (RL); its precursor, DeepSeek-R1-Zero, was trained via RL without any initial supervised fine-tuning (SFT).
Key Capabilities & Features
- Reasoning Distillation: Leverages reasoning patterns from the powerful DeepSeek-R1 model, demonstrating that complex reasoning can be effectively transferred to smaller, dense models.
- Enhanced Performance: Shows strong performance across various benchmarks, particularly in math, code, and general reasoning tasks, outperforming several larger models in its class.
- Qwen2.5 Base: Built upon the Qwen2.5-Math-7B architecture, integrating its strengths with DeepSeek-R1's advanced reasoning.
- Long Context: Supports a context length of 32,768 tokens, enabling processing of extensive inputs.
- Open-Source: Released to support the research community in developing better smaller models.
Usage Recommendations
- Temperature: Set the temperature between 0.5 and 0.7 (0.6 is recommended) to avoid repetitive or incoherent outputs.
- Prompting: Avoid system prompts; include all instructions within the user prompt.
- Mathematical Problems: For best results, include a directive such as "Please reason step by step, and put your final answer within \boxed{}".
- Enforced Reasoning: The model can occasionally bypass its thinking pattern; to ensure thorough reasoning, enforce the model to start its response with "<think>\n" at the beginning of every output.
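The recommendations above can be sketched as a small prompt-assembly helper. This is a minimal illustration in plain Python; the function names and the simplified message format are assumptions for this sketch, not the model's actual chat template or API.

```python
# Sketch of the usage recommendations. Helper names and the message
# format below are illustrative assumptions, not part of the model's
# real chat template.

# Directive recommended for mathematical problems.
MATH_DIRECTIVE = (
    "Please reason step by step, and put your final answer within \\boxed{}."
)

# Recommended sampling range is 0.5-0.7; 0.6 is a sensible default.
# (top_p 0.95 matches the settings DeepSeek reports for evaluation.)
SAMPLING = {"temperature": 0.6, "top_p": 0.95}


def build_messages(question: str, math: bool = False) -> list:
    """Pack ALL instructions into the user turn -- no system prompt."""
    content = question
    if math:
        content = f"{content}\n{MATH_DIRECTIVE}"
    return [{"role": "user", "content": content}]


def force_thinking(assistant_prefix: str = "") -> str:
    """Seed the assistant turn so the response starts with "<think>\n",
    the prefix the upstream model card recommends enforcing."""
    return "<think>\n" + assistant_prefix


if __name__ == "__main__":
    msgs = build_messages("What is 7 * 6?", math=True)
    print(msgs)
    print(repr(force_thinking()))
```

The returned message list would then be rendered through the model's actual chat template, with the forced prefix appended to the assistant turn before generation.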
This model is ideal for applications requiring robust reasoning in a more compact form factor, benefiting from the advanced RL-driven reasoning capabilities of its larger DeepSeek-R1 parent.