Qwen/Qwen3-235B-A22B is a 235-billion-parameter Mixture-of-Experts (MoE) causal language model from the Qwen team, with 22 billion parameters activated per token. Its distinguishing feature is seamless switching between a thinking mode for complex reasoning, math, and coding, and a non-thinking mode for efficient general-purpose dialogue. The model offers strong reasoning, human-preference alignment, agentic tool use, and multilingual instruction following across more than 100 languages and dialects, with a native context length of 32,768 tokens.
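A minimal sketch of toggling the two modes with Hugging Face transformers, following the standard Qwen3 chat-template pattern; the `enable_thinking` flag is the switch documented in the Qwen3 model card, while the prompt and generation settings here are illustrative (note that a model of this size requires a multi-GPU setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your steps."}]

# enable_thinking=True renders the prompt for the reasoning mode;
# set it to False for fast, direct replies without a reasoning trace.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][len(inputs.input_ids[0]):], skip_special_tokens=True
)
print(response)
```

In multi-turn conversations, the model card also describes soft switches (`/think` and `/no_think` appended to a user message) that override this flag on a per-turn basis.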