RefalMachine/RuadaptQwen2.5-1.5B-instruct
Text generation · Model size: 1.5B · Quantization: BF16 · Context length: 32k · Published: Nov 18, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

RefalMachine/RuadaptQwen2.5-1.5B-instruct is a 1.5-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture, adapted to Russian by RefalMachine. The adaptation replaces the tokenizer, continues pretraining on a Russian corpus, and then applies Learned Embedding Propagation (LEP). Because the Russian-specific tokenizer encodes Russian text into fewer tokens, generation of Russian text is up to 60% faster than with the original Qwen2.5-1.5B-Instruct, making the model well suited to Russian-language processing tasks.
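Since this is a Qwen2.5-based instruct model with standard Hugging Face weights, it can presumably be run with the `transformers` library in the usual way. The sketch below is illustrative, not an official usage snippet from the model authors: the system prompt, dtype, and generation settings are assumptions.

```python
# Minimal sketch of running the model with Hugging Face transformers.
# The system prompt and generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "RefalMachine/RuadaptQwen2.5-1.5B-instruct"

def build_messages(user_prompt: str) -> list:
    # Chat-message format used by Qwen2.5-style instruct models.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    # Render the chat messages into the model's prompt template.
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Example call (downloads ~3 GB of weights on first use):
# print(generate("Расскажи о Москве."))  # Russian for "Tell me about Moscow."
```

The chat-template call matters here: Qwen2.5-style instruct models expect their special role tokens around the prompt, and skipping `apply_chat_template` typically degrades output quality.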
