zsqzz/Qwen3-1.7B_opsd_masked_grpo_dapo_hf

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen3-1.7B is a 1.7 billion parameter causal language model from the Qwen series. It uniquely supports seamless switching between a 'thinking mode' for complex logical reasoning, math, and coding, and a 'non-thinking mode' for efficient general-purpose dialogue. The model demonstrates enhanced reasoning capabilities, superior human preference alignment for creative writing and role-playing, and strong agent capabilities with external tool integration, supporting over 100 languages.


Qwen3-1.7B: A Versatile Language Model with Dynamic Thinking Modes

Qwen3-1.7B is a 1.7 billion parameter causal language model from the Qwen series, benefiting from improvements in both its pretraining and post-training stages. A key innovation is its ability to dynamically switch between a 'thinking mode' for complex tasks such as logical reasoning, mathematics, and code generation, and a 'non-thinking mode' for general dialogue, optimizing performance across diverse scenarios.

Key Capabilities

  • Dynamic Thinking Modes: Seamlessly transitions between a reasoning-focused mode and an efficient general-purpose mode, configurable via the enable_thinking argument or the /think and /no_think soft-switch prompts.
  • Enhanced Reasoning: Significantly improves performance in mathematics, code generation, and commonsense logical reasoning compared to previous Qwen models.
  • Human Preference Alignment: Excels in creative writing, role-playing, multi-turn dialogues, and instruction following, offering a more natural conversational experience.
  • Agentic Capabilities: Demonstrates strong integration with external tools, achieving leading performance in complex agent-based tasks among open-source models.
  • Multilingual Support: Supports over 100 languages and dialects with robust multilingual instruction following and translation abilities.
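The soft-switch behavior described above can be sketched in plain Python. Note that resolve_thinking_mode is a hypothetical helper for illustration only, not part of any Qwen toolchain; it assumes the documented convention that a /think or /no_think tag in the latest user turn overrides the enable_thinking default, with the last tag winning:

```python
def resolve_thinking_mode(user_message: str, enable_thinking: bool = True) -> bool:
    """Return whether thinking mode should be active for this turn.

    A /think or /no_think tag in the user message overrides the
    enable_thinking default; if several tags appear, the last one wins.
    """
    mode = enable_thinking
    for token in user_message.split():
        if token == "/think":
            mode = True
        elif token == "/no_think":
            mode = False
    return mode


print(resolve_thinking_mode("Solve this integral step by step /think"))  # True
print(resolve_thinking_mode("Just chat casually /no_think"))             # False
print(resolve_thinking_mode("Hello there"))  # no tag: falls back to default (True)
```

In a real deployment the resolved flag would be passed to the chat template (e.g. as enable_thinking) rather than returned to the caller.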

Best Practices

To optimize performance, specific sampling parameters are recommended for each mode:

  • Thinking mode: Temperature=0.6, TopP=0.95, TopK=20, MinP=0
  • Non-thinking mode: Temperature=0.7, TopP=0.8, TopK=20, MinP=0

The model also benefits from an adequate output budget (up to 38,912 tokens for complex problems) and from standardized output formats when benchmarking math and multiple-choice questions.
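The recommended settings above can be collected into a small configuration helper. This is a minimal sketch, not an official API: SAMPLING_PRESETS and get_sampling_params are hypothetical names, and the returned dict assumes keyword-style generation parameters (temperature, top_p, top_k, min_p, max_new_tokens) as commonly accepted by generate-style interfaces:

```python
# Recommended sampling parameters from the model card, keyed by mode.
SAMPLING_PRESETS = {
    "thinking":     {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0.0},
    "non_thinking": {"temperature": 0.7, "top_p": 0.8,  "top_k": 20, "min_p": 0.0},
}

# The card recommends up to 38,912 output tokens for complex problems.
MAX_OUTPUT_TOKENS = 38912


def get_sampling_params(thinking: bool, max_new_tokens: int = 32768) -> dict:
    """Hypothetical helper: build a generation-kwargs dict for the chosen mode."""
    preset = SAMPLING_PRESETS["thinking" if thinking else "non_thinking"]
    return {
        **preset,
        "do_sample": True,
        # Cap the requested budget at the card's recommended maximum.
        "max_new_tokens": min(max_new_tokens, MAX_OUTPUT_TOKENS),
    }


params = get_sampling_params(thinking=True, max_new_tokens=40000)
print(params["temperature"], params["top_p"], params["max_new_tokens"])  # 0.6 0.95 38912
```

Keeping the two presets in one table makes it easy to switch modes per request without scattering magic numbers through serving code.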