Waynerd7/Qwen2.5-Coder-7B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen2.5-Coder-7B-Instruct is a 7.61-billion-parameter instruction-tuned causal language model from the Qwen2.5-Coder series, developed by Qwen. It is designed specifically for code generation, code reasoning, and code fixing, building on the Qwen2.5 foundation with 5.5 trillion training tokens that include source code. The architecture is a transformer with RoPE, SwiGLU, and RMSNorm, and the model supports a full context length of 131,072 tokens, making it well suited to complex coding tasks and long-context applications.
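As an instruction-tuned model, it expects chat-formatted input. Below is a minimal sketch of the ChatML-style prompt layout used across the Qwen2.5 family; in practice, `tokenizer.apply_chat_template` from the `transformers` library applies this formatting automatically, so hand-assembling it as shown here is for illustration only.

```python
# Sketch of the ChatML-style prompt format used by the Qwen2.5 series.
# When serving the model, tokenizer.apply_chat_template normally builds
# this string for you; the special tokens below are illustrative.

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt for an instruct model."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the model generates from here
    )

prompt = build_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open at the assistant turn, so generation continues as the model's reply.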
