URajinda/qwen1.5b-myanmar-cpt-final
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Jan 11, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm
URajinda/qwen1.5b-myanmar-cpt-final is a 1.5-billion-parameter continually pre-trained (CPT) model based on URajinda/ShweYon_Qwen2.5-Burmese-1.5B-v1.2.4, with a 131,072-token context length. The model is optimized for Burmese (Myanmar) language capabilities, particularly spoken and formal text patterns. It was trained with LoRA (Low-Rank Adaptation) and specializes in efficient handling of Burmese vocabulary and tokens, making it well suited to Burmese text generation and as a foundation for Burmese instruction tuning.
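Because the model follows the standard Qwen2-style causal-LM layout, it can be loaded with the usual Hugging Face transformers interface. The snippet below is a minimal sketch under that assumption; only the repository id and the BF16 dtype come from this page, and the Burmese prompt is illustrative.

```python
# Minimal sketch: loading and sampling from the model with transformers.
# Assumes the standard AutoModelForCausalLM/AutoTokenizer interface; this is
# not an official example from the model repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "URajinda/qwen1.5b-myanmar-cpt-final"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the BF16 quantization listed above
    device_map="auto",
)

# Illustrative Burmese prompt ("Myanmar is ...") for open-ended continuation.
prompt = "မြန်မာနိုင်ငံသည်"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```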
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model:
temperature: – | top_p: – | top_k: – | frequency_penalty: – | presence_penalty: – | repetition_penalty: – | min_p: –
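In practice these parameters map onto the sampling options of an OpenAI-compatible completion request. The sketch below is an assumption-laden example: the Featherless base URL, the use of extra_body for non-OpenAI parameters (top_k, repetition_penalty, min_p), and all numeric values are illustrative, since no popular configuration values are shown above.

```python
# Sketch: passing sampler settings to the model through an OpenAI-compatible
# client. Endpoint URL, extra_body handling, and all values are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed Featherless endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="URajinda/qwen1.5b-myanmar-cpt-final",
    messages=[
        # Burmese prompt: "Write a greeting in Burmese."
        {"role": "user", "content": "မြန်မာစာဖြင့် နှုတ်ဆက်စကား ရေးပါ။"}
    ],
    temperature=0.7,          # example values only; the popular configs
    top_p=0.9,                # for this model were not captured above
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={              # parameters outside the base OpenAI schema
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```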