Qwen/Qwen3.6-27B

Vision · Concurrency Cost: 2 · Model Size: 27.8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen3.6-27B is a 27.8 billion parameter causal language model developed by Qwen, featuring a vision encoder and a native context length of 262,144 tokens, extensible up to 1,010,000 tokens. This model is specifically optimized for agentic coding, handling frontend workflows and repository-level reasoning with enhanced fluency and precision. It also introduces 'Thinking Preservation' to retain reasoning context from historical messages, streamlining iterative development and improving efficiency for complex coding tasks.


Qwen3.6-27B: Enhanced Agentic Coding and Multimodal Capabilities

Qwen3.6-27B is a 27.8 billion parameter causal language model from Qwen, building upon the Qwen3.5 series with a focus on stability and real-world utility for developers. It features a native context length of 262,144 tokens, which can be extended up to 1,010,000 tokens using YaRN scaling techniques.
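The YaRN extension mentioned above is usually enabled through a `rope_scaling` entry in the model configuration. As a minimal sketch, assuming the Hugging Face Transformers key convention for Qwen-style models, the scaling factor can be derived from the two context lengths stated here (1,010,000 / 262,144 ≈ 3.85, rounded up); the exact keys and values are assumptions, not official settings for this model:

```python
# Sketch of a YaRN rope_scaling override for extending the context window.
# Key names follow the Hugging Face Transformers convention for Qwen-style
# models; treat the exact values as assumptions, not official settings.

NATIVE_CTX = 262_144    # native context length stated on the model card
TARGET_CTX = 1_010_000  # extended context length stated on the model card

# YaRN scales RoPE by target/native; round up to a whole-number factor.
factor = TARGET_CTX / NATIVE_CTX  # ≈ 3.85

rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # ceiling of ~3.85
    "original_max_position_embeddings": NATIVE_CTX,
}
```

A dictionary like this is typically merged into `config.json` (or passed as a config override at load time) before serving with an extended window.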

Key Capabilities & Differentiators

  • Agentic Coding: Significantly upgraded to handle frontend workflows and repository-level reasoning with improved fluency and precision. Benchmarks show strong performance on coding agent tasks like SWE-bench Verified (77.2) and Terminal-Bench 2.0 (59.3).
  • Thinking Preservation: Introduces a new option to retain reasoning context from historical messages, which streamlines iterative development, reduces overhead, and enhances decision consistency in agent scenarios.
  • Multimodal Input: Supports both image and video inputs, functioning as a Vision Language Model (VLM) with strong performance across various visual understanding benchmarks, including MMMU (82.9) and RealWorldQA (84.1).
  • Multi-Token Prediction (MTP): Supports MTP for optimized inference, recommended for specific serving frameworks like SGLang.
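Because the model accepts image input, a multimodal chat request can be sketched as an OpenAI-compatible payload, which is the format most VLM serving stacks accept. The image URL, question, and helper function below are illustrative placeholders; the content-part schema follows the widely used OpenAI vision convention rather than anything confirmed for this specific deployment:

```python
# Sketch of a vision chat request payload in the OpenAI-compatible format
# commonly used to serve VLMs. The URL and question are placeholders.

def build_vision_message(image_url: str, question: str) -> dict:
    """Build one user message mixing an image part and a text part."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }

payload = {
    "model": "Qwen/Qwen3.6-27B",
    "messages": [
        build_vision_message(
            "https://example.com/chart.png",
            "What trend does this chart show?",
        )
    ],
}
```

The same payload shape would be POSTed to a `/v1/chat/completions` endpoint on whatever server hosts the model.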

Ideal Use Cases

  • Code Generation and Debugging: Excels in complex coding tasks, particularly those requiring repository-level understanding and iterative refinement.
  • Agentic Applications: Highly suitable for building AI agents that require persistent reasoning context and tool-use capabilities, especially with frameworks like Qwen-Agent and Qwen Code.
  • Multimodal AI: Effective for applications involving image and video analysis, such as visual question answering, document understanding, and spatial intelligence tasks.
  • Long Context Processing: Capable of handling ultra-long texts up to 1,010,000 tokens, making it suitable for tasks requiring extensive contextual understanding.
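As a rough guide to the long-context figures above, a quick sketch can estimate whether a document fits the native 262,144-token window or needs the YaRN-extended limit. The 4-characters-per-token ratio is a common English-text heuristic, not a property of this model's tokenizer, so exact counts should come from the real tokenizer:

```python
# Rough heuristic for deciding which context tier a text falls into.
# ~4 characters per token is a common English-text approximation; for exact
# counts, use the model's actual tokenizer instead.

NATIVE_CTX = 262_144     # native context window
EXTENDED_CTX = 1_010_000 # YaRN-extended window
CHARS_PER_TOKEN = 4      # heuristic, not tokenizer-accurate

def context_tier(text: str) -> str:
    """Classify text as fitting the native window, the extended window, or neither."""
    est_tokens = len(text) // CHARS_PER_TOKEN
    if est_tokens <= NATIVE_CTX:
        return "native"
    if est_tokens <= EXTENDED_CTX:
        return "extended (YaRN)"
    return "too long"

print(context_tier("x" * 1_000_000))  # ~250k estimated tokens
```

A check like this is useful for deciding up front whether a repository or document set needs the extended-context serving configuration at all.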