Olak17/Qwen2.5-Coder-7B-Instruct

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Apr 10, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Olak17/Qwen2.5-Coder-7B-Instruct is a 7.61-billion-parameter instruction-tuned causal language model from the Qwen2.5-Coder family, developed by the Qwen team. The model is specifically optimized for code generation, code reasoning, and code fixing, and builds on the Qwen2.5 architecture. It supports context lengths of up to 131,072 tokens (32,768 natively, extendable via YaRN), making it suitable for complex coding tasks and real-world applications such as code agents.
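A minimal quickstart with Hugging Face `transformers` might look like the sketch below. It assumes `transformers` and `torch` are installed and that the checkpoint loads like other Qwen2.5-family models; the system prompt and generation settings are illustrative, not prescribed by this card.

```python
# Minimal sketch: load the model with Hugging Face transformers and generate code.
# Assumes `transformers` and `torch` are installed and enough memory for 7.6B weights.
MODEL_ID = "Olak17/Qwen2.5-Coder-7B-Instruct"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    # Imports are kept inside the function so this module can be inspected
    # without the heavy dependencies being present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": prompt},
    ]
    # apply_chat_template renders the chat-style prompt the model was tuned on.
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and keep only the newly generated completion.
    return tokenizer.decode(
        output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```

Calling `generate("Write a Python function that reverses a linked list.")` then returns the decoded completion as a string.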


Olak17/Qwen2.5-Coder-7B-Instruct Overview

This model is an instruction-tuned variant of the Qwen2.5-Coder series, a family of code-specific large language models developed by Qwen. It has 7.61 billion parameters and is built on the Qwen2.5 architecture: a transformer with RoPE positional embeddings, SwiGLU activations, RMSNorm, and attention QKV bias.
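These design choices surface directly in the checkpoint's `config.json`. The fragment below is an illustrative sketch only; the field values are typical for Qwen2.5-7B-class models (`hidden_act: "silu"` corresponds to the SwiGLU MLP, `rms_norm_eps` to RMSNorm, `rope_theta` to RoPE) and should be checked against the actual file:

```json
{
  "architectures": ["Qwen2ForCausalLM"],
  "hidden_act": "silu",
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "max_position_embeddings": 32768
}
```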

Key Capabilities

  • Enhanced Code Performance: Significantly improves upon previous versions in code generation, reasoning, and fixing.
  • Extensive Training: Trained on 5.5 trillion tokens, including source code, text-code grounding, and synthetic data.
  • Long Context Support: Capable of processing up to 131,072 tokens, utilizing techniques like YaRN for length extrapolation.
  • General Competencies: Maintains strong performance in mathematics and general language understanding alongside its coding prowess.
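Context lengths beyond the native 32,768-token window rely on YaRN rope scaling. In the upstream Qwen2.5-Coder documentation this is enabled by adding a `rope_scaling` block to `config.json`; the values below are the ones Qwen documents for extrapolating 32,768 → 131,072 tokens (note that static YaRN scaling can slightly affect quality on short inputs):

```json
{
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```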

Good For

  • Code Generation: Generating code snippets across a wide range of programming languages.
  • Code Reasoning: Assisting with understanding and debugging code logic.
  • Code Fixing: Identifying and suggesting corrections for code errors.
  • Code Agents: Serving as a foundation for advanced code-centric AI applications.
  • Long-form Code Tasks: Handling large codebases or complex programming problems due to its extended context window.
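As a concrete example of the code-fixing use case, the sketch below builds an OpenAI-compatible chat-completions payload for this model. The serving stack and endpoint (e.g. vLLM exposing `/v1/chat/completions`) are assumptions for illustration, not part of this card.

```python
import json

# A buggy function we want the model to repair (illustrative input).
broken = "def add(a, b):\n    return a - b  # bug: subtracts instead of adding"

# OpenAI-compatible chat payload; POST this body to a server's
# /v1/chat/completions endpoint with any HTTP client.
payload = {
    "model": "Olak17/Qwen2.5-Coder-7B-Instruct",
    "messages": [
        {"role": "system", "content": "You fix bugs in code and briefly explain the fix."},
        {"role": "user", "content": "Fix this function:\n```python\n" + broken + "\n```"},
    ],
    "temperature": 0.2,
    "max_tokens": 256,
}
body = json.dumps(payload)
```

A low temperature such as 0.2 is a common choice for code-fixing, where deterministic, conservative edits are usually preferable to creative rewrites.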