hariharanv04/qwen2.5-coder-7b-metadata-128k-dr
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Jan 23, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
hariharanv04/qwen2.5-coder-7b-metadata-128k-dr is a 7.6 billion parameter Qwen2.5-Coder model finetuned by hariharanv04. It was trained using Unsloth and Hugging Face's TRL library, enabling 2x faster training. With a 128k context length, it is optimized for coding tasks and efficient processing of long code sequences.
Model Overview
This model, hariharanv04/qwen2.5-coder-7b-metadata-128k-dr, is a 7.6 billion parameter Qwen2.5-Coder variant finetuned by hariharanv04. It was developed from unsloth/qwen2.5-coder-7b-instruct-bnb-4bit as its base and leverages Unsloth together with Hugging Face's TRL library for accelerated training, achieving 2x faster finetuning.
Key Characteristics
- Base Model: Qwen2.5-Coder-7B-Instruct (finetuned from the 4-bit checkpoint unsloth/qwen2.5-coder-7b-instruct-bnb-4bit)
- Parameter Count: 7.6 billion parameters
- Context Length: Supports a substantial 128k token context window, making it suitable for handling extensive codebases or detailed technical documentation.
- Training Efficiency: Finetuned with Unsloth's optimizations for 2x faster training; a minimal loading sketch follows this list.
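For illustration, here is a minimal loading sketch using Unsloth's FastLanguageModel API, which the card says the model was trained with. The max_seq_length and load_in_4bit values are assumptions inferred from the advertised 128k context window and the 4-bit base checkpoint, not settings confirmed by the author.

```python
# Minimal sketch, assuming the repo hosts weights loadable via Unsloth.
# Exact loading arguments are assumptions, not taken from the model card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hariharanv04/qwen2.5-coder-7b-metadata-128k-dr",
    max_seq_length=131072,  # assumes the advertised 128k token context
    load_in_4bit=True,      # matches the 4-bit bnb base checkpoint
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's inference mode
```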
Ideal Use Cases
- Code Generation and Completion: Its coder-specific finetuning suggests strong performance in generating and completing programming code (see the inference sketch after this list).
- Long Context Code Analysis: The 128k context window is particularly beneficial for tasks requiring understanding and processing large blocks of code or multiple related files.
- Developer Tools: Suitable for integration into IDEs, code assistants, or other developer-centric applications where efficient code understanding and generation are critical.
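As a usage illustration, the sketch below runs a simple code-generation prompt with plain Hugging Face transformers. It assumes the repository exposes merged weights that inherit the Qwen2.5-Coder-Instruct chat template; if the repo instead ships LoRA adapters from the Unsloth finetune, they would need to be applied to the base model with peft first.

```python
# Minimal sketch with plain Hugging Face transformers. Assumes merged
# weights and the Qwen2.5-Coder chat template (not confirmed by the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hariharanv04/qwen2.5-coder-7b-metadata-128k-dr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Hypothetical prompt, used only to illustrate the call pattern.
messages = [{"role": "user",
             "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```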