Qwen/Qwen3-235B-A22B
Hugging Face
Text Generation · Concurrency Cost: 4 · Model Size: 235B · Quant: FP8 · Ctx Length: 32k · Published: Apr 27, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen/Qwen3-235B-A22B is a 235 billion parameter Mixture-of-Experts (MoE) causal language model developed by Qwen, with 22 billion parameters activated per token. This model uniquely supports seamless switching between a 'thinking mode' for complex reasoning, math, and coding, and a 'non-thinking mode' for efficient general dialogue. It excels in reasoning capabilities, human preference alignment, agentic tasks, and multilingual instruction following across over 100 languages, supporting a native context length of 32,768 tokens.


Qwen3-235B-A22B: A Flexible MoE Language Model

Qwen3-235B-A22B is a 235-billion-parameter Mixture-of-Experts (MoE) causal language model from the Qwen series, activating 22 billion parameters per token. It can switch seamlessly between a 'thinking mode' for complex logical reasoning, mathematics, and coding, and a 'non-thinking mode' for efficient, general-purpose dialogue, allowing optimal performance across diverse scenarios.

Key Capabilities

  • Dynamic Thinking Modes: Users can enable or disable thinking mode via the enable_thinking argument when applying the tokenizer's chat template, or switch dynamically per turn with /think and /no_think tags in prompts.
  • Enhanced Reasoning: Demonstrates significant improvements in mathematical problem-solving, code generation, and commonsense logical reasoning compared to previous Qwen models.
  • Superior Human Alignment: Excels in creative writing, role-playing, multi-turn conversations, and instruction following, providing a more natural and engaging user experience.
  • Advanced Agent Capabilities: Achieves leading performance among open-source models in complex agent-based tasks, integrating precisely with external tools in both thinking and non-thinking modes.
  • Extensive Multilingual Support: Supports over 100 languages and dialects with strong multilingual instruction following and translation abilities.
  • Long Context Handling: Natively supports a context length of 32,768 tokens, extendable to 131,072 tokens using the YaRN method for processing long texts.
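
In thinking mode, the model emits its reasoning inside <think>...</think> tags before the final answer. A minimal sketch of separating the two parts of a completion, assuming that output format (the helper name split_thinking is illustrative, not part of any Qwen API):

```python
def split_thinking(text: str) -> tuple[str, str]:
    """Return (thinking, answer) from a model completion.

    If no <think> block is present (non-thinking mode),
    the thinking part is empty.
    """
    start, end = "<think>", "</think>"
    i = text.find(start)
    j = text.find(end)
    if i == -1 or j == -1 or j < i:
        # Non-thinking mode: the whole completion is the answer.
        return "", text.strip()
    thinking = text[i + len(start):j].strip()
    answer = text[j + len(end):].strip()
    return thinking, answer

# Example completion produced in thinking mode:
completion = "<think>2+2 is 4.</think>\nThe answer is 4."
thinking, answer = split_thinking(completion)
print(thinking)  # 2+2 is 4.
print(answer)    # The answer is 4.
```

The same helper degrades gracefully for non-thinking-mode output, where no <think> block appears and the full text is treated as the answer.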

Good For

  • Applications requiring robust logical reasoning, such as complex problem-solving and code generation.
  • Interactive agents and tools that need to integrate with external functions.
  • Multilingual applications demanding strong instruction following and translation.
  • Creative writing, role-playing, and engaging conversational AI experiences.
  • Scenarios where dynamic switching between detailed reasoning and efficient general dialogue is beneficial.
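
For inputs beyond the native 32,768-token window, the YaRN extension mentioned above is typically enabled through a rope_scaling entry in the model's config.json. A hedged sketch, assuming transformers-style YaRN fields (a factor of 4.0 over the native 32,768 tokens yields the 131,072-token limit); verify exact field names against the official model card:

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Note that static YaRN scaling applies regardless of input length, so it is generally recommended only when long-context processing is actually needed.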