shamilmohammedi/Azhar-Model-v0.3-Penta-Study
Text Generation · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Concurrency Cost: 1
Azhar-Model-v0.3-Penta-Study is a 7.6-billion-parameter instruction-tuned causal language model by shamilmohammedi, built on the Qwen2.5 base. It was fine-tuned with Unsloth and Hugging Face's TRL library, which support memory- and compute-efficient training. The model targets general language understanding and generation tasks, inheriting the capabilities of its Qwen2.5 base.
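Since this is a standard instruction-tuned causal LM hosted under the repo id above, it should load with the usual `transformers` chat workflow. The sketch below is a hypothetical usage example, not from the model card: the generation parameters and the helper names (`build_messages`, `generate`) are assumptions, and the chat-template call follows the generic `transformers` API for Qwen2.5-style instruct models.

```python
# Hypothetical usage sketch for Azhar-Model-v0.3-Penta-Study.
# Assumes the standard transformers chat-template workflow; the helper
# names and generation settings are illustrative, not from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "shamilmohammedi/Azhar-Model-v0.3-Penta-Study"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format apply_chat_template expects."""
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model, format the prompt via its chat template, and decode a reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated reply is returned.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("Summarize instruction tuning in one sentence."))
```

Because the weights are published in FP8 with a 32k context window, a single modern GPU with sufficient VRAM should suffice; `device_map="auto"` lets `transformers` place layers automatically.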