Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO
Task: Text Generation · Model Size: 1.5B · Quant: BF16 · Context Length: 32k · Published: Jan 31, 2025 · License: apache-2.0 · Architecture: Transformer · Concurrency Cost: 1 · Open Weights

QVikhr-2.5-1.5B-Instruct-SMPO is a 1.5 billion parameter instruction-tuned causal language model developed by Vikhrmodels, based on Qwen-2.5-1.5B-Instruct. It is specialized for Russian language tasks while supporting bilingual RU/EN interactions, and has been aligned using Simple Margin Preference Optimization (SMPO) on the GrandMaster-PRO-MAX dataset to enhance response quality.
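Since this is an instruction-tuned chat model published with open weights, it can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal usage example, not an official snippet from the model card: the chat-template call, the `bfloat16` dtype (matching the BF16 quant listed above), and the Russian example prompt are assumptions; `build_messages` is a hypothetical helper added here for illustration.

```python
MODEL_ID = "Vikhrmodels/QVikhr-2.5-1.5B-Instruct-SMPO"


def build_messages(user_prompt: str) -> list[dict]:
    # Hypothetical helper: wrap a single user turn in the chat-message
    # format that tokenizer.apply_chat_template expects.
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt-building helper above can be used
    # without the heavy transformers/torch dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 is assumed from the quant listed in the model metadata.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    # Russian prompt, reflecting the model's RU specialization ("Hi! Tell me about yourself.")
    print(generate("Привет! Расскажи о себе."))
```

Because the model is bilingual RU/EN, English prompts can be passed to `generate` the same way.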