ZadyJ/Qwen3-1.7B

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Apr 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Qwen3-1.7B is a 1.7-billion-parameter causal language model from the Qwen3 series, developed by the Qwen team. It can switch seamlessly between a 'thinking mode' for complex reasoning, math, and coding and a 'non-thinking mode' for general dialogue, all within a single model. With a 32,768-token context length, it excels at instruction following, agent tasks, and multilingual use across more than 100 languages.


Qwen3-1.7B: A Versatile Language Model with Dynamic Reasoning

Qwen3-1.7B is a 1.7-billion-parameter causal language model in the latest Qwen series. Its defining feature is seamless switching between a 'thinking mode' for complex logical reasoning, mathematics, and code generation and a 'non-thinking mode' for efficient, general-purpose dialogue. Because the mode is selected per request, a single deployment can serve both fast chat and deliberate multi-step reasoning.
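For concreteness, here is a minimal usage sketch with Hugging Face transformers. It assumes the upstream Qwen/Qwen3-1.7B checkpoint name and the standard Qwen3 chat template; the enable_thinking flag and the </think> token id (151668) come from the upstream template and should be verified against the tokenizer you actually load.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"  # assumed upstream checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many prime numbers are below 30?"}]

# The mode is chosen at template time: enable_thinking=True lets the model
# emit a <think>...</think> reasoning block before its answer; False skips it.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=2048)[0]
output_ids = output_ids[len(inputs.input_ids[0]):].tolist()

# Split the reasoning trace from the final answer at the </think> token
# (id 151668 in the Qwen3 vocabulary); if absent, treat it all as answer.
try:
    split = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    split = 0
print("thinking:", tokenizer.decode(output_ids[:split], skip_special_tokens=True).strip())
print("answer:  ", tokenizer.decode(output_ids[split:], skip_special_tokens=True).strip())
```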

Key Capabilities

  • Dynamic Reasoning: Uniquely supports on-the-fly switching between a dedicated reasoning mode and a general dialogue mode, improving performance on both complex and simple tasks (see the sketch after this list).
  • Enhanced Reasoning: Shows significant improvements in mathematical problem-solving, code generation, and logical and commonsense reasoning over previous Qwen models.
  • Human Preference Alignment: Excels in creative writing, role-playing, multi-turn conversations, and instruction following, providing a more natural and engaging user experience.
  • Agentic Functionality: Offers strong tool-calling capabilities, integrating precisely with external tools in both thinking and non-thinking modes, achieving leading performance in complex agent-based tasks among open-source models.
  • Multilingual Support: Supports over 100 languages and dialects with robust multilingual instruction following and translation abilities.
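The on-the-fly switching mentioned above also works turn by turn. When enable_thinking is left at its default, the Qwen3 chat template recognizes /think and /no_think tags in user messages as soft switches, with the most recent tag taking precedence. A brief sketch, reusing the tokenizer from the example above; the conversation content is illustrative only:

```python
# Soft switch within one conversation: the latest /think or /no_think tag
# wins, so the reasoning mode can flip on every user turn.
messages = [
    {"role": "user", "content": "Summarize the plot of Hamlet. /no_think"},
    {"role": "assistant", "content": "Hamlet avenges his father's murder..."},
    {"role": "user", "content": "Now prove that sqrt(2) is irrational. /think"},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True  # enable_thinking defaults to True
)
```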

Best Practices for Optimal Performance

To maximize performance, specific sampling parameters are recommended for each mode:

  • Thinking Mode: Use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0. Avoid greedy decoding.
  • Non-Thinking Mode: Use Temperature=0.7, TopP=0.8, TopK=20, and MinP=0.

A maximum output length of 32,768 tokens is recommended for most queries; for highly complex problems, extend it to 38,912 tokens.
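Wired into the transformers generate() API (reusing the model and inputs objects from the quickstart sketch above), the recommendations translate to something like the following. min_p is supported in recent transformers releases and defaults to 0, so it is shown only for completeness:

```python
# Thinking mode: sample rather than decode greedily.
thinking_kwargs = dict(
    do_sample=True, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0,
    max_new_tokens=32768,   # raise toward 38912 for very hard problems
)

# Non-thinking mode: slightly hotter temperature, tighter nucleus.
non_thinking_kwargs = dict(
    do_sample=True, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0,
)

output = model.generate(**inputs, **thinking_kwargs)
```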