spdev39/Qwen2.5-Coder-32B-Instruct
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
Qwen2.5-Coder-32B-Instruct is a 32.5-billion-parameter instruction-tuned causal language model from the Qwen team, optimized for code generation, code reasoning, and code fixing. Trained on 5.5 trillion tokens, including extensive source code and synthetic data, it delivers state-of-the-art coding ability comparable to GPT-4o while retaining strong performance in mathematics and general tasks. It supports a full context length of 131,072 tokens, suiting it to complex, long-context real-world applications such as Code Agents.
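As an instruction-tuned chat model, it is typically served behind an OpenAI-compatible chat-completions endpoint. The sketch below shows one way to build such a request body; the deployment name, system prompt, and sampling settings are illustrative assumptions, not part of this listing.

```python
import json

# Assumed deployment name on an OpenAI-compatible server; adjust to your setup.
MODEL_ID = "Qwen2.5-Coder-32B-Instruct"

def build_chat_request(user_prompt: str, max_tokens: int = 512) -> dict:
    """Build a chat-completion request body for an instruction-tuned coder model."""
    return {
        "model": MODEL_ID,
        "messages": [
            # Hypothetical system prompt; tailor it to your use case.
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature tends to suit code generation
    }

request_body = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(request_body, indent=2))
```

The body would then be POSTed to the server's `/v1/chat/completions` route with any standard HTTP client; long inputs can take advantage of the model's large context window.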