Kazuki1450/Qwen3-1.7B-Base_csum_3_10_tok_add_1p0_0p0_1p0_grpo_42_rule
Text generation · Model size: 2B · Quant: BF16 · Context length: 32k · Published: Mar 18, 2026 · Architecture: Transformer

Kazuki1450/Qwen3-1.7B-Base_csum_3_10_tok_add_1p0_0p0_1p0_grpo_42_rule is a language model with roughly 1.7 billion parameters (listed as 2B), fine-tuned from Qwen/Qwen3-1.7B-Base. It was trained with GRPO, the reinforcement-learning method introduced in the DeepSeekMath paper for improving mathematical reasoning. With a context length of 32768 tokens, it targets tasks that require multi-step mathematical problem-solving and logical deduction.


Overview

This model, Kazuki1450/Qwen3-1.7B-Base_csum_3_10_tok_add_1p0_0p0_1p0_grpo_42_rule, is a fine-tuned variant of the Qwen3-1.7B-Base model, published by Kazuki1450. It retains the base model's roughly 1.7-billion-parameter architecture and supports a context length of 32768 tokens.
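Assuming the checkpoint follows the standard Qwen3 causal-LM layout on the Hugging Face Hub (no official usage snippet is given on this card, so the following is a sketch rather than a verified recipe), it can be loaded and queried like any other Transformers causal LM:

```python
# Sketch: load the checkpoint and generate a completion with Transformers.
# Adjust dtype and device placement for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kazuki1450/Qwen3-1.7B-Base_csum_3_10_tok_add_1p0_0p0_1p0_grpo_42_rule"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed above
    device_map="auto",
)

prompt = "Solve step by step: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base-model fine-tune rather than an instruction-tuned chat model, plain text prompts like the one above are likely more reliable than chat templates.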

Key Capabilities

  • Enhanced Mathematical Reasoning: The model was trained with GRPO (Group Relative Policy Optimization), the method introduced in the DeepSeekMath paper for improving mathematical reasoning in language models. This suggests a focus on complex problem-solving and logical deduction.
  • Fine-tuned Performance: Built upon Qwen3-1.7B-Base, this version benefits from additional training with the TRL (Transformer Reinforcement Learning) framework, indicating potential improvements on the targeted tasks.
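The core idea of GRPO is to drop the learned value model of PPO and instead compute advantages relative to a group of completions sampled for the same prompt: each completion's reward is normalized by the group's mean and standard deviation. The function below is an illustrative sketch of that normalization (names are mine, not from TRL or the DeepSeekMath code):

```python
# Sketch of the group-relative advantage at the heart of GRPO:
# sample G completions per prompt, score each with a reward function,
# then normalize rewards within the group to zero mean and unit variance.

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of scalar rewards: (r - mean) / (std + eps)."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

# Example: four sampled answers to one math problem, scored 1.0 when the
# final answer matched the reference and 0.0 otherwise.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Completions that beat their group's average get positive advantages and are reinforced; below-average ones are penalized, with no critic network required.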

Good For

  • Mathematical and Logical Tasks: Given its training methodology, this model is particularly well-suited for applications that demand strong mathematical reasoning, such as solving equations, following proofs, or working through complex logical puzzles.
  • Research and Development: Developers and researchers exploring advanced fine-tuning techniques, especially those interested in GRPO or TRL, may find this model a valuable resource for experimentation and comparison.