Alelcv27/Qwen2.5-7B-Code-v2
Text generation · Open weights

- Concurrency cost: 1
- Model size: 7.6B
- Quantization: FP8
- Context length: 32k
- Published: Jan 29, 2026
- License: apache-2.0
- Architecture: Transformer

Alelcv27/Qwen2.5-7B-Code-v2 is a 7.6 billion parameter Qwen2-based causal language model developed by Alelcv27, fine-tuned from unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit. The model was trained with Unsloth and Hugging Face's TRL library, which together enable faster fine-tuning. Building on the Qwen2 architecture and this fine-tuning, it is optimized for code-related tasks.
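As a causal language model published with open weights, the model can presumably be loaded with the standard Hugging Face `transformers` API. The sketch below is illustrative and untested against this specific repository; it assumes the checkpoint ships a compatible tokenizer and config, and that suitable GPU memory is available for a 7.6B model.

```python
# Minimal usage sketch for Alelcv27/Qwen2.5-7B-Code-v2 (assumed transformers-compatible).
# Requires: pip install transformers torch accelerate

MODEL_ID = "Alelcv27/Qwen2.5-7B-Code-v2"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for a code-related prompt (greedy decoding)."""
    # Imports kept local so the module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",   # place weights on available GPU(s)
        torch_dtype="auto",  # use the dtype stored in the checkpoint
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

Since the base model is instruction-tuned, applying the tokenizer's chat template (`tokenizer.apply_chat_template`) before generation may yield better results than raw prompting.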
