Arun63/qwen-coder-7b-instruct

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Apr 20, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights, Cold

Arun63/qwen-coder-7b-instruct is a 7.6-billion-parameter instruction-tuned model in the Qwen2.5-Coder family, developed by Arun63. It is fine-tuned for coding tasks, using Unsloth and Hugging Face's TRL library for accelerated training, and is optimized for code generation and understanding in developer-centric applications.


Model Overview

Arun63/qwen-coder-7b-instruct is a 7.6 billion parameter language model, fine-tuned from the unsloth/qwen2.5-coder-7b-instruct-bnb-4bit base model. Developed by Arun63, this model is specifically engineered for coding applications.

Key Characteristics

  • Architecture: Qwen2 transformer architecture (Qwen2.5-Coder family).
  • Parameter Count: 7.6 billion parameters.
  • Training Optimization: Fine-tuned with Unsloth and Hugging Face's TRL library, which Unsloth advertises as enabling roughly 2x faster training.
  • Context Length: 32,768 tokens.
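Like other Qwen chat models, the base checkpoint uses a ChatML-style prompt format with `<|im_start|>`/`<|im_end|>` markers. The sketch below hand-rolls that format purely for illustration; in practice you should rely on `tokenizer.apply_chat_template()`, which reads the exact template shipped with the model, since the fine-tune may have adjusted it.

```python
# Illustrative sketch of the ChatML-style prompt format used by Qwen chat
# models. Prefer tokenizer.apply_chat_template() in real use; this version
# only shows what the rendered prompt roughly looks like.

def to_chatml(messages: list[dict]) -> str:
    """Render a list of {"role", "content"} messages as a ChatML prompt."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
])
print(prompt)
```

Because the open assistant turn is the last segment, generation stops cleanly when the model emits its own `<|im_end|>` token.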

Primary Use Case

This model is primarily designed for code-related tasks. Its fine-tuning process, which used Unsloth for efficiency, targets performance in coding benchmarks and practical development scenarios. Developers looking for a specialized model for code generation, completion, or analysis may find it particularly useful due to this targeted optimization.
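A minimal way to try the model is through the transformers text-generation pipeline. This is a hedged sketch, assuming the checkpoint is available under the id `Arun63/qwen-coder-7b-instruct` on the Hugging Face Hub (as this card's title suggests) and that your machine has enough memory for a 7.6B checkpoint; the `build_messages` helper is illustrative, not part of any library.

```python
# Hypothetical usage sketch: generate code with the transformers pipeline.
# Assumes `pip install transformers accelerate` and sufficient GPU/CPU memory.

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat message list in the format transformers chat pipelines accept."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    from transformers import pipeline

    pipe = pipeline("text-generation", model="Arun63/qwen-coder-7b-instruct")
    out = pipe(
        build_messages("Write a Python function that reverses a string."),
        max_new_tokens=256,
    )
    print(out[0]["generated_text"])
```

The model download and generation are kept behind the `__main__` guard so the helper can be imported and inspected without pulling multi-gigabyte weights.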