zsqzz/Qwen3-1.7B_openthoughts_sft_step198

Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The zsqzz/Qwen3-1.7B_openthoughts_sft_step198 model is a 1.7 billion parameter causal language model from the Qwen3 series, developed by Qwen. It uniquely supports seamless switching between a 'thinking mode' for complex logical reasoning, math, and coding, and a 'non-thinking mode' for general-purpose dialogue, with a context length of 32,768 tokens. This model excels in reasoning capabilities, human preference alignment for creative writing and role-playing, and agent capabilities with external tool integration, supporting over 100 languages.


Qwen3-1.7B: Dual-Mode Reasoning and Multilingual LLM

Qwen3-1.7B is a 1.7 billion parameter causal language model from the Qwen series, distinguished by its innovative dual-mode operation. It can seamlessly switch between a 'thinking mode' for complex logical reasoning, mathematics, and code generation, and a 'non-thinking mode' for efficient, general-purpose dialogue. This flexibility ensures optimal performance across diverse tasks.

Key Capabilities

  • Adaptive Reasoning: Uniquely supports dynamic switching between thinking and non-thinking modes, enhancing performance in both analytical and conversational scenarios.
  • Enhanced Reasoning: Demonstrates significant improvements in mathematical problem-solving, code generation, and commonsense logical reasoning.
  • Human Preference Alignment: Excels in creative writing, role-playing, multi-turn dialogues, and instruction following, providing a more natural conversational experience.
  • Agentic Functionality: Offers strong capabilities for integrating with external tools, achieving leading performance in complex agent-based tasks among open-source models.
  • Multilingual Support: Capable of handling over 100 languages and dialects, with robust multilingual instruction following and translation abilities.
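The mode switch described above is typically exposed through the chat template. Below is a minimal sketch, assuming the `transformers` library and the Qwen3 chat template's `enable_thinking` flag; the `split_thinking` helper for separating the `<think>` reasoning trace from the final answer is an illustrative assumption, not part of any library:

```python
import re

def split_thinking(output_text: str):
    """Separate a <think>...</think> reasoning trace from the final answer.

    Qwen3's thinking mode wraps its chain of thought in <think> tags;
    this parser is an illustrative helper, not a library API.
    """
    match = re.search(r"<think>(.*?)</think>", output_text, flags=re.DOTALL)
    if match:
        thinking = match.group(1).strip()
        answer = output_text[match.end():].strip()
        return thinking, answer
    # Non-thinking mode: no trace, the whole output is the answer.
    return "", output_text.strip()

# Prompt construction with the template flag (sketch; requires model weights):
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("zsqzz/Qwen3-1.7B_openthoughts_sft_step198")
# prompt = tok.apply_chat_template(
#     [{"role": "user", "content": "Solve 12 * 17."}],
#     tokenize=False,
#     add_generation_prompt=True,
#     enable_thinking=True,  # False selects non-thinking mode
# )

thinking, answer = split_thinking("<think>12 * 17 = 204</think>The answer is 204.")
```

Downstream code can then log or discard the `thinking` string while surfacing only `answer` to the user.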

Best Practices for Usage

  • Sampling Parameters: Recommended settings vary by mode: Temperature=0.6, TopP=0.95, TopK=20 for thinking mode; Temperature=0.7, TopP=0.8, TopK=20 for non-thinking mode. Avoid greedy decoding in thinking mode.
  • Output Length: Use an output budget of 32,768 tokens for most queries, extending to 38,912 tokens for highly complex problems.
  • Standardized Output: Provides guidance for structuring prompts to standardize outputs, especially for math and multiple-choice questions.
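The per-mode sampling recommendations above can be collected in a small helper. This is a sketch: the dictionary keys mirror common `generate()` argument names in `transformers` and are an assumption, not a fixed API of this model:

```python
def sampling_params(thinking: bool) -> dict:
    """Recommended sampling settings per mode, per the model card guidance.

    Greedy decoding is discouraged in thinking mode, so both modes sample.
    """
    if thinking:
        # Thinking mode: lower temperature, wider top_p for exploratory reasoning.
        return {"do_sample": True, "temperature": 0.6, "top_p": 0.95, "top_k": 20}
    # Non-thinking mode: slightly higher temperature, tighter top_p.
    return {"do_sample": True, "temperature": 0.7, "top_p": 0.8, "top_k": 20}

# Hypothetical usage with a loaded model:
# model.generate(**inputs, max_new_tokens=32768, **sampling_params(thinking=True))
```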