Alelcv27/Qwen2.5-7B-Code-v2

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Jan 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Alelcv27/Qwen2.5-7B-Code-v2 is a 7.6 billion parameter Qwen2-based causal language model published by Alelcv27, fine-tuned from unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, a combination chosen for faster training, and is oriented toward code-related tasks such as generation, completion, and understanding.


Model Overview

Alelcv27/Qwen2.5-7B-Code-v2 is a 7.6 billion parameter language model developed by Alelcv27. It is a fine-tuned variant of Qwen2.5-7B-Instruct, built from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit checkpoint, a 4-bit bitsandbytes quantization of the instruct model.
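
The page does not include usage instructions, so the following is a minimal loading-and-generation sketch, assuming the checkpoint follows the standard Qwen2.5 chat interface and loads with Hugging Face transformers. The repo id is taken from the page; the dtype, device placement, and sample prompt are illustrative assumptions.

```python
# Minimal sketch: load the model with Hugging Face transformers and run a
# chat-formatted code prompt. Assumes the repo exposes standard Qwen2.5
# weights and chat template; adjust precision/device to your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Qwen2.5-7B-Code-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available GPUs/CPU
)

# Qwen2.5-style chat prompt for a code task (illustrative).
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a linked list."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Per the metadata above, the model supports a 32k context window, so longer files or multi-file prompts should fit, subject to available memory.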

Key Characteristics

  • Architecture: Based on the Qwen2 family of models.
  • Parameter Count: 7.6 billion parameters.
  • Training Efficiency: Trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training than standard fine-tuning methods (a sketch of this recipe follows this list).
  • License: Distributed under the Apache-2.0 license.
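
For readers curious what the Unsloth + TRL pipeline looks like, below is a hedged sketch of the widely used Unsloth SFT recipe. The actual dataset, LoRA configuration, and hyperparameters for this model are not published, so every value here is an assumption (including the hypothetical code_dataset), and the SFTTrainer signature shown matches the older TRL releases used in Unsloth's notebooks.

```python
# Hedged sketch of a typical Unsloth + TRL fine-tuning run; NOT the
# published recipe for this model. All hyperparameters are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the same 4-bit base the model card names; Unsloth patches the
# model for faster training and lower memory use.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,   # matches the 32k context listed above
    load_in_4bit=True,
)

# Attach LoRA adapters to the usual attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=code_dataset,      # hypothetical: your code SFT dataset
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```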

Good For

  • Code-related tasks: Given its name and fine-tuning origin, it is likely optimized for code generation, completion, and understanding (a completion-style usage sketch follows this list).
  • Applications requiring efficient training: The use of Unsloth points to a focus on training speed and resource efficiency, which also makes the model a convenient starting point for further fine-tuning.
  • Developers familiar with Qwen2 models: Users already working with Qwen2-based models get a familiar option, since the tokenizer and chat interface carry over from the Qwen2.5 family.
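
To illustrate the completion-style usage mentioned above, here is a self-contained sketch that feeds the model a raw code prefix instead of a chat-formatted prompt. The prefix and decoding settings are illustrative assumptions, and instruct-tuned models can behave differently on raw continuations than on chat input.

```python
# Completion-style sketch: raw code continuation without the chat template.
# Assumption: the instruct-tuned checkpoint still continues bare code sensibly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Qwen2.5-7B-Code-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "def quicksort(arr):\n    "  # illustrative code prefix
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```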