Kazuki1450/Qwen3-1.7B-Base_csum_6_10_rel_1e-5_1p0_0p0_1p0_grpo_2_rule is a roughly 2-billion-parameter language model fine-tuned from Qwen/Qwen3-1.7B-Base. It was trained with the GRPO method, as introduced in the DeepSeekMath paper, to strengthen mathematical reasoning. With a 40,960-token context window, it is aimed at tasks that demand robust logical and mathematical processing in a compact model.
Model Overview
Kazuki1450/Qwen3-1.7B-Base_csum_6_10_rel_1e-5_1p0_0p0_1p0_grpo_2_rule is a roughly 2-billion-parameter language model fine-tuned from the base Qwen/Qwen3-1.7B-Base model. It was trained with GRPO (Group Relative Policy Optimization), the reinforcement-learning method introduced in the paper "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models" (arXiv:2402.03300). The fine-tuning was conducted using the TRL framework.
Key Characteristics
- Base Model: Qwen3-1.7B-Base architecture.
- Parameter Count: Approximately 2 billion parameters.
- Context Length: Supports a context window of 40,960 tokens.
- Training Method: Utilizes GRPO, suggesting an emphasis on improving reasoning and problem-solving abilities, particularly in mathematical contexts.
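The "group relative" part of GRPO refers to how advantages are computed: several completions are sampled per prompt, and each completion's reward is normalized against the mean and standard deviation of its group, avoiding the separate value network used in PPO. A minimal sketch of this normalization, following the DeepSeekMath formulation (the epsilon term is an illustrative numerical-stability choice):

```python
import statistics


def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Advantage of each completion relative to its sampling group:
    (reward - group mean) / (group std + eps)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]
```

With a rule-based 0/1 reward, this means completions that answer correctly get a positive advantage and incorrect ones a negative advantage, scaled by how rare each outcome was within the group.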
Potential Use Cases
- Mathematical Reasoning: Ideal for tasks requiring logical deduction and mathematical problem-solving, given its GRPO training.
- Text Generation: Capable of general text generation, building upon the Qwen3-1.7B-Base capabilities.
- Research and Development: Suitable for researchers exploring the impact of GRPO on smaller language models and their reasoning performance.