Qwen/Qwen3-Coder-Next
Task: Text generation
Concurrency cost: 2
Model size: 80B
Quantization: FP8
Context length: 32k
Published: Jan 30, 2026
License: apache-2.0
Architecture: Transformer

Qwen3-Coder-Next is an 80-billion-parameter (3 billion activated) open-weight causal language model developed by Qwen, engineered specifically for coding agents and local development. It features a 262,144-token context window and a Mixture of Experts (MoE) architecture, enabling efficient performance comparable to much larger models. The model excels at long-horizon reasoning, complex tool usage, and recovery from execution failures, making it highly effective for dynamic coding tasks and seamless integration with various CLI/IDE platforms.
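The efficiency claim above follows from the MoE arithmetic: only a small fraction of the total weights is activated per token. A minimal sketch of that calculation, using the 80B total / 3B active figures from the description (illustrative only):

```python
# Parameter counts from the model description above.
total_params = 80e9   # total parameters (80B)
active_params = 3e9   # parameters activated per token (3B)

# Fraction of the network that does work on any given token.
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.2%}")  # ~3.75%
```

In other words, each forward pass costs roughly what a ~3B dense model costs, while the router can draw on 80B parameters' worth of expert capacity.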


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model each specify the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
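These parameters map directly onto the fields of an OpenAI-compatible chat-completion request, which is how most hosted inference providers (including Featherless) expose sampler settings. A minimal sketch of building such a request payload; the specific values shown are illustrative placeholders, not a recommended configuration, and field support (e.g. `top_k`, `repetition_penalty`, `min_p`) varies by provider, so check the provider's API docs:

```python
import json

# Assumed: an OpenAI-compatible /chat/completions endpoint.
# Sampler values below are placeholders, not tuned recommendations.
payload = {
    "model": "Qwen/Qwen3-Coder-Next",
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "min_p": 0.0,
}
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed with the provider's API key; only the sampler-field structure is the point here.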