Maridex/Qwen2.5-Coder-32B-Instruct
Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Apr 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Qwen2.5-Coder-32B-Instruct is a 32.5 billion parameter instruction-tuned causal language model developed by Qwen, part of the Qwen2.5-Coder series. Optimized for code generation, code reasoning, and code fixing, it was trained on 5.5 trillion tokens, including source code and text-code grounding data. The model supports a context length of up to 131,072 tokens and is designed for advanced coding applications and Code Agents, with coding capabilities competitive with GPT-4o.
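The context window bounds how much code fits in a single prompt (32k as listed here, up to 131,072 tokens per the description above). As a minimal sketch of working within that budget, the helper below pre-chunks a large source file; the ~4-characters-per-token ratio is a rough assumption for illustration, not a property of this model's tokenizer, so exact counts should come from the model's own tokenizer.

```python
def chunk_for_context(text: str, ctx_tokens: int = 32768,
                      reserve_tokens: int = 2048,
                      chars_per_token: float = 4.0) -> list[str]:
    """Split text into pieces that roughly fit a model's context window.

    reserve_tokens leaves room for the instruction and the reply;
    chars_per_token is a crude heuristic, NOT the real tokenizer ratio.
    """
    budget_chars = int((ctx_tokens - reserve_tokens) * chars_per_token)
    return [text[i:i + budget_chars] for i in range(0, len(text), budget_chars)]

# Example: a 500k-character file splits into 5 chunks of at most 122,880 chars.
chunks = chunk_for_context("x" * 500_000)
print(len(chunks), max(len(c) for c in chunks))  # → 5 122880
```

For precise budgeting, replace the heuristic with a real token count from the model's tokenizer before sending each chunk.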
