Hothaifa/Hajeen-V5-03
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Apr 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Hothaifa/Hajeen-V5-03 is a 7.6 billion parameter Qwen2-based causal language model developed by Hothaifa. This instruction-tuned model was finetuned from unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit using Unsloth and Hugging Face's TRL library, which enabled faster training. With a 32768-token context length, it is suited to tasks that require extensive context processing and instruction following.
Hothaifa/Hajeen-V5-03: An Instruction-Tuned Qwen2 Model
Hothaifa/Hajeen-V5-03 is a 7.6 billion parameter instruction-tuned language model built on the Qwen2 architecture. Developed by Hothaifa, it uses unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit as its base, suggesting a focus on coding and instruction-following tasks.
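The card does not include a usage snippet, so here is a minimal inference sketch, assuming the repository ships standard Hugging Face weights and inherits the Qwen2-style chat template from its base model; the prompt and generation settings are illustrative, not prescribed by the author.

```python
# Minimal inference sketch for Hothaifa/Hajeen-V5-03.
# Assumes standard Hugging Face weights and a Qwen2-style chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hothaifa/Hajeen-V5-03"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your hardware
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."},
]

# Build the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```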
Key Capabilities & Training
- Architecture: Based on the robust Qwen2 model family.
- Parameter Count: Features 7.6 billion parameters, offering a balance between performance and computational efficiency.
- Context Length: Supports a substantial context window of 32768 tokens, suitable for processing longer inputs and generating coherent, extended responses.
- Optimized Finetuning: The model was finetuned with Unsloth and Hugging Face's TRL library, which the author reports made training 2x faster (a hedged sketch of this kind of pipeline follows this list).
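The card does not publish the actual training recipe, but an Unsloth-plus-TRL supervised finetuning run typically looks like the sketch below. Everything here is an assumption for illustration: the dataset, LoRA settings, sequence length, and hyperparameters are placeholders, and the SFTTrainer keyword arguments match older TRL releases (newer TRL moves them into an SFTConfig).

```python
# Hedged sketch of an Unsloth + TRL SFT run like the one this card describes.
# Dataset, LoRA settings, and hyperparameters are illustrative placeholders,
# not the author's actual recipe.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model the card names; Unsloth patches it for fast training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length=4096,  # the released model advertises 32k; 4k keeps the sketch cheap
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder instruction dataset, flattened into a single text field.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"
})

# Kwargs below follow older TRL releases; newer TRL expects an SFTConfig instead.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()
```

Unsloth's speedup comes largely from fused Triton kernels and 4-bit base weights, which is consistent with the 2x-faster training claim above.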
Good For
- Applications requiring a capable instruction-following model with a large context window.
- Teams planning further finetuning on a budget, since the Unsloth-based recipe shows this model family trains efficiently.
- Use cases benefiting from the Qwen2 architecture's general language understanding and generation capabilities.