URajinda/qwen1.5b-myanmar-cpt-final1
TEXT GENERATION · Concurrency cost: 1 · Model size: 1.5B · Quant: BF16 · Context length: 32k · Published: Jan 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
URajinda/qwen1.5b-myanmar-cpt-final1 is a 1.5 billion parameter Qwen2-based causal language model developed by URajinda, fine-tuned from unsloth/qwen2.5-1.5b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enables roughly 2x faster fine-tuning. The model is tuned for Myanmar-language tasks.
Overview
URajinda/qwen1.5b-myanmar-cpt-final1 is a 1.5 billion parameter language model fine-tuned by URajinda from the unsloth/qwen2.5-1.5b-bnb-4bit base model. Training combined the Unsloth library with Hugging Face's TRL, which accelerated fine-tuning by roughly a factor of two.
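A minimal loading sketch with Hugging Face transformers follows. It assumes the checkpoint is published on the Hub under the repo id above with standard Qwen2 tokenizer and config files; the BF16 dtype follows the metadata at the top of this card, and the Burmese prompt is only an illustrative placeholder.

```python
# Minimal sketch: load the checkpoint with transformers and generate text.
# Assumes the repo id resolves on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "URajinda/qwen1.5b-myanmar-cpt-final1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, per the quant listed in the card metadata
    device_map="auto",
)

# Illustrative Myanmar-language prompt ("Hello" in Burmese).
prompt = "မင်္ဂလာပါ"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```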
Key Capabilities
- Efficient Training: Utilizes Unsloth for 2x faster fine-tuning (see the loading sketch after this list).
- Myanmar Language Focus: Specifically fine-tuned for applications involving the Myanmar language.
- Qwen2 Architecture: Built upon the robust Qwen2 model family.
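Because the model was trained with Unsloth, it can likely also be loaded through Unsloth's FastLanguageModel for faster inference. The sketch below is an assumption: the max_seq_length value is an arbitrary illustrative choice (the card lists 32k context), and load_in_4bit mirrors the 4-bit base checkpoint lineage rather than anything stated on this card.

```python
# Assumed Unsloth loading path, not confirmed by the model card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="URajinda/qwen1.5b-myanmar-cpt-final1",
    max_seq_length=4096,    # illustrative; the card lists up to 32k context
    load_in_4bit=True,      # mirrors the 4-bit base checkpoint lineage
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```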
Good for
- Applications requiring a compact yet capable model for Myanmar language processing.
- Developers looking for a model trained with efficient methods like Unsloth.
- Use cases where faster fine-tuning is a critical advantage.