URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2-Pretrained

Hugging Face
  • Task: Text generation
  • Model size: 1.5B
  • Quantization: BF16
  • Context length: 32k
  • Published: Dec 31, 2025
  • License: apache-2.0
  • Architecture: Transformer (open weights)

URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2-Pretrained is a Qwen2-based language model developed by URajinda and fine-tuned specifically for the Burmese language. It was trained with Unsloth and Hugging Face's TRL library for faster training, and is an updated version of the URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.0F model, optimized for Burmese language tasks.


Overview

URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2-Pretrained is a specialized language model developed by URajinda. It is based on the Qwen2 architecture and has been specifically fine-tuned for the Burmese language. This version, v1.2, is an iteration building upon the previously released URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.0F model.
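Since the card lists open weights hosted on Hugging Face, the model can presumably be loaded with the standard transformers API. The sketch below shows that workflow; the prompt, generation settings, and the `truncate_to_context` helper (which fits long prompts into the 32k context window) are illustrative assumptions, not part of the model card.

```python
# Minimal inference sketch, assuming the standard Hugging Face transformers
# AutoModelForCausalLM workflow; the repo id is taken from the model card.
MODEL_ID = "URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2-Pretrained"


def truncate_to_context(token_ids, max_len=32_768):
    """Keep only the most recent tokens so the prompt fits the 32k context."""
    return token_ids[-max_len:]


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the helper above can be used without the library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Replace with a Burmese prompt of your choice.
    print(generate("မြန်မာနိုင်ငံသည်"))
```

Downloading the 1.5B weights in BF16 requires roughly 3 GB of disk and memory; on CPU-only machines generation will work but will be slow.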

Key Capabilities

  • Burmese Language Processing: The model is explicitly fine-tuned for tasks involving the Burmese language, making it suitable for applications requiring understanding or generation in Burmese.
  • Efficient Training: It was trained with Unsloth and Hugging Face's TRL library, which enabled a 2x faster training process.
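The card states the model was trained with Unsloth and TRL but does not publish the recipe. The sketch below shows what such a run could look like using Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`; the dataset name, record key, and hyperparameters are hypothetical placeholders, not the author's actual configuration.

```python
# Hedged training sketch: continue-pretraining the prior v1.0F checkpoint
# on a Burmese corpus with Unsloth + TRL. All data/hyperparameter choices
# here are assumptions for illustration only.
def format_example(example: dict) -> dict:
    """Map a raw record to the single 'text' field SFTTrainer expects.
    The 'burmese_text' key is a hypothetical name for the raw field."""
    return {"text": example["burmese_text"]}


def train():
    # Imported lazily; these libraries are only needed for the actual run.
    from unsloth import FastLanguageModel
    from trl import SFTTrainer, SFTConfig
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        # Prior version named on the card as the base for v1.2.
        model_name="URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.0F",
        max_seq_length=32_768,
    )

    dataset = load_dataset("my_burmese_corpus", split="train")  # placeholder
    dataset = dataset.map(format_example)

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=SFTConfig(per_device_train_batch_size=2, max_steps=1000),
    )
    trainer.train()
```

Unsloth patches the model for memory-efficient kernels at load time, which is where the card's "2x faster" training claim would come from.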

Good For

  • Developers and researchers working on Burmese natural language processing tasks.
  • Applications requiring a language model with specific expertise in Burmese.
  • Projects that can benefit from a Qwen2-based model optimized for a low-resource language.