shamilmohammedi/Azhar-Model-v0.3-Penta-Study

Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

Azhar-Model-v0.3-Penta-Study is a 7.6-billion-parameter Qwen2.5-based instruction-tuned causal language model developed by shamilmohammedi. It was fine-tuned with Unsloth and Hugging Face's TRL library, which enable memory-efficient training. The model is intended for general language understanding and generation tasks, building on the capabilities of its Qwen2.5 base.


Overview

shamilmohammedi/Azhar-Model-v0.3-Penta-Study is a 7.6 billion parameter instruction-tuned language model based on the Qwen2.5 architecture. It was developed by shamilmohammedi and fine-tuned from the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit model.

Key Characteristics

  • Base Model: Qwen2.5-7B-Instruct (fine-tuned via the unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit checkpoint)
  • Parameter Count: 7.6 billion parameters
  • Context Length: 32,768 tokens
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which Unsloth reports can speed up fine-tuning by roughly 2x.
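Given the characteristics above, the model should load like any other Transformers causal LM. The sketch below assumes the checkpoint is published on the Hugging Face Hub under the repository id shown on this page; it only defines a loader, since actually downloading the weights requires network access and substantial memory.

```python
# Minimal sketch of loading the checkpoint with Hugging Face Transformers.
# Assumption: the repo id below is available on the Hub as shown on this page.
MODEL_ID = "shamilmohammedi/Azhar-Model-v0.3-Penta-Study"

def load_model():
    """Download and load the tokenizer and model (requires network access)."""
    # Imported lazily so this sketch has no hard dependency at import time.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keep the dtype stored in the checkpoint
        device_map="auto",    # place layers across available GPU(s)/CPU
    )
    return tokenizer, model
```

With `device_map="auto"`, Accelerate shards the 7.6B parameters across whatever devices are available rather than requiring a single large GPU.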

Intended Use Cases

This model is suitable for a variety of general-purpose natural language processing tasks, benefiting from its Qwen2.5 foundation and instruction tuning. Its efficient training methodology makes it a reasonable candidate for workflows that need rapid fine-tuning iteration or deployment in resource-constrained environments.
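Because the model is instruction-tuned on a Qwen2.5 base, prompts should follow the Qwen2.5 family's ChatML conversation format. In practice you would call `tokenizer.apply_chat_template`, but the layout it produces for this family can be sketched directly (the example messages below are illustrative, not from the model card):

```python
# Sketch of the ChatML prompt layout used by Qwen2.5-family models.
# In real use, prefer tokenizer.apply_chat_template(messages, add_generation_prompt=True).
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Qwen2.5 in one sentence."},
])
```

Using the tokenizer's own chat template is safer than hand-building strings, since it stays in sync with any special tokens the fine-tune introduced.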