Text generation · Model size: 0.8B · Quantization: BF16 · Context length: 32K · Published: Dec 20, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

Qwen3-0.6B is a 0.6 billion parameter causal language model from the Qwen series. It supports seamless switching between a 'thinking mode' for complex reasoning, math, and coding, and a 'non-thinking mode' for efficient general dialogue. The model excels in reasoning, instruction following, agent capabilities, and multilingual tasks across more than 100 languages, with a context length of 32,768 tokens.


Qwen3-0.6B: A Versatile Language Model with Dynamic Thinking Modes

Qwen3-0.6B is a 0.6 billion parameter causal language model from the Qwen series, designed for advanced reasoning, instruction following, and multilingual applications. A key innovation is its ability to dynamically switch between a 'thinking mode' for complex logical reasoning, mathematics, and code generation, and a 'non-thinking mode' for efficient, general-purpose dialogue. This dual-mode design lets a single model serve both latency-sensitive chat and deeper reasoning workloads without swapping checkpoints.

Key Capabilities

  • Dynamic Thinking Modes: Seamlessly transitions between a reasoning-focused mode and an efficient general dialogue mode, enhancing performance for specific tasks.
  • Enhanced Reasoning: Demonstrates significant improvements in mathematical problem-solving, code generation, and commonsense logical reasoning compared to previous Qwen models.
  • Superior Human Preference Alignment: Excels in creative writing, role-playing, and multi-turn conversations, delivering natural and engaging interactions.
  • Advanced Agent Capabilities: Integrates with external tools in both thinking and non-thinking modes, achieving leading performance on complex agent benchmarks among open-source models.
  • Multilingual Support: Supports over 100 languages and dialects with strong multilingual instruction following and translation abilities.
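The mode switching described above can be sketched with two small helpers. This is a minimal sketch, not an official API: it assumes a Hugging Face tokenizer loaded from `Qwen/Qwen3-0.6B`, whose chat template exposes the `enable_thinking` flag (the hard switch) and honors the `/think` / `/no_think` soft switches appended to a user turn, per Qwen3's published usage. The function names are illustrative.

```python
def build_chat_prompt(tokenizer, user_text: str, thinking: bool = True) -> str:
    """Render a single-turn Qwen3 chat prompt, toggling thinking mode.

    `tokenizer` is assumed to be loaded via
    AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B"); `enable_thinking`
    is the hard switch exposed by Qwen3's chat template.
    """
    return tokenizer.apply_chat_template(
        [{"role": "user", "content": user_text}],
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=thinking,
    )


def tag_soft_switch(user_text: str, thinking: bool) -> str:
    """Per-turn soft switch: Qwen3 also honors "/think" and "/no_think"
    appended to a user message in multi-turn conversations."""
    suffix = "/think" if thinking else "/no_think"
    return f"{user_text} {suffix}"
```

In multi-turn conversations the soft switch lets you flip modes turn by turn without re-rendering the whole template, while `enable_thinking` sets the default for the conversation.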

Best Practices for Optimal Performance

  • Sampling Parameters: Recommended settings vary by mode: Temperature=0.6, TopP=0.95, TopK=20 for thinking mode; Temperature=0.7, TopP=0.8, TopK=20 for non-thinking mode. Greedy decoding is discouraged for thinking mode.
  • Output Length: An output length of 32,768 tokens is recommended for most queries, extending to 38,912 for highly complex problems.
  • Standardized Output: Use specific prompts for math problems (e.g., "Please reason step by step, and put your final answer within \boxed{}") and multiple-choice questions (e.g., JSON structure for the answer).
  • Agentic Use: Qwen3 excels in tool calling, with Qwen-Agent recommended for leveraging its agentic abilities.