Qwen/Qwen3-235B-A22B

Status: Warm
Visibility: Public
Parameters: 235B
Precision: FP8
Context length: 32768
Date: Apr 27, 2025
License: apache-2.0
Source: Hugging Face
Overview

Qwen3-235B-A22B: A Flexible MoE Language Model

Qwen3-235B-A22B is a 235-billion-parameter Mixture-of-Experts (MoE) causal language model from the Qwen3 series, activating 22 billion parameters per token. Within a single model it can switch between a 'thinking mode' for complex logical reasoning, mathematics, and coding, and a 'non-thinking mode' for efficient, general-purpose dialogue, giving strong performance across diverse scenarios.

Key Capabilities

  • Dynamic Thinking Modes: Users can explicitly enable or disable thinking mode via enable_thinking in the tokenizer's chat template, or switch on the fly with /think and /no_think tags in prompts (see the first sketch after this list).
  • Enhanced Reasoning: Demonstrates significant improvements in mathematical problem-solving, code generation, and commonsense logical reasoning compared to previous Qwen models.
  • Superior Human Alignment: Excels in creative writing, role-playing, multi-turn conversations, and instruction following, providing a more natural and engaging user experience.
  • Advanced Agent Capabilities: Achieves leading performance among open-source models in complex agent-based tasks, integrating precisely with external tools in both thinking and non-thinking modes.
  • Extensive Multilingual Support: Supports over 100 languages and dialects with strong multilingual instruction following and translation abilities.
  • Long Context Handling: Natively supports a context length of 32,768 tokens, extendable to 131,072 tokens with the YaRN method for processing long texts (a sample YaRN configuration follows this list).
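
To make the thinking-mode switch concrete, here is a minimal sketch of the standard Hugging Face Transformers chat-template workflow. It assumes the enable_thinking flag is accepted by this model's chat template as described above; the prompt text is only a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Placeholder prompt; appending "/no_think" to a user turn is the in-prompt
# way to request non-thinking behavior, per the soft-switch tags above.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]

# enable_thinking=True asks the model to emit an internal reasoning block
# before the final answer; set it to False for fast, general-purpose dialogue.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

For a 235B MoE model, multi-GPU serving through an inference framework is the more realistic deployment; the snippet only illustrates where the mode switch lives in the request.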
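
The long-context extension can be expressed as a RoPE-scaling override at load time. The sketch below assumes the common Hugging Face rope_scaling convention with a YaRN factor of 4.0 (32,768 × 4 ≈ 131,072 tokens); the exact field names and recommended factor should be taken from the model card rather than this example.

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"

# Assumed YaRN override: scale the native 32,768-token window by 4x to reach
# roughly 131,072 tokens. Field names follow the usual rope_scaling schema.
config = AutoConfig.from_pretrained(model_name)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```

Note that static RoPE scaling of this kind applies to every request, so it is typically worth enabling only when inputs genuinely exceed the native window.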

Good For

  • Applications requiring robust logical reasoning, such as complex problem-solving and code generation.
  • Interactive agents and tools that need to integrate with external functions.
  • Multilingual applications demanding strong instruction following and translation.
  • Creative writing, role-playing, and engaging conversational AI experiences.
  • Scenarios where dynamic switching between detailed reasoning and efficient general dialogue is beneficial.