atsuki-yamaguchi/Qwen2.5-7B-Instruct-my-madlad-mean-tuned
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Nov 22, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

The atsuki-yamaguchi/Qwen2.5-7B-Instruct-my-madlad-mean-tuned model is a 7.6-billion-parameter instruction-tuned causal language model, continually pre-trained from Qwen2.5-7B-Instruct. Developed by Atsuki Yamaguchi, it is adapted specifically for Burmese: the tokenizer is expanded with a 10K-token target-language vocabulary, and the new token embeddings are mean-initialized (each new embedding set to the average of the embeddings of the subwords it replaces). This adaptation makes the model well suited to applications that need strong Burmese text generation and understanding.
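Since the checkpoint follows the standard Qwen2.5-Instruct layout, a minimal loading and generation sketch with the Hugging Face `transformers` library might look like the following. The model ID is taken from the page header; the dtype, device placement, prompt, and generation settings are illustrative assumptions, not settings confirmed by the model card.

```python
# Minimal usage sketch (assumes the `transformers` library; settings below
# are illustrative and not prescribed by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atsuki-yamaguchi/Qwen2.5-7B-Instruct-my-madlad-mean-tuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: pick a dtype your hardware supports
    device_map="auto",
)

# Qwen2.5-Instruct models are chat models, so build the prompt
# via the tokenizer's chat template.
messages = [
    {"role": "user", "content": "မင်္ဂလာပါ"},  # "Hello" in Burmese
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```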
