Vikhrmodels/QVikhr-3-8B-Instruction
QVikhr-3-8B-Instruction is an 8-billion-parameter instruction-tuned causal language model developed by Vikhrmodels, based on the Qwen3-8B architecture. It is fine-tuned on the GrandMaster2 dataset for effective text processing in both Russian and English. The model excels at following instructions, providing contextual responses, and analyzing text, and shows strong performance on mathematics and physics tasks in Russian.
QVikhr-3-8B-Instruction: Bilingual LLM for Russian and English
QVikhr-3-8B-Instruction is an 8-billion-parameter instruction-tuned language model developed by Vikhrmodels, built upon the Qwen3-8B architecture. The model is specialized for bilingual (Russian and English) text processing, having undergone supervised fine-tuning (SFT) on GrandMaster2, a large synthetic Russian dataset.
Key Capabilities
- Bilingual Proficiency: Optimized for high-efficiency text processing, instruction generation, contextual responses, and text analysis in both Russian and English.
- Enhanced Performance: Achieves a DOoM score of 0.445 on mathematics and physics tasks, outperforming its base model Qwen3-8B (0.417) and approaching GPT-4.1 (0.466).
- Instruction Following: Designed for instruction-based learning tasks, making it suitable for generating precise and context-aware responses.
Good for
- Applications requiring accurate and fast text processing in Russian and English.
- Instruction-based tasks and contextual text analysis.
- Integration into professional environments and custom applications where strong bilingual capabilities, especially in Russian, are crucial.
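For integration, a minimal usage sketch with the Hugging Face `transformers` library is shown below. The generation settings and the Russian example prompt are illustrative assumptions, not official recommendations from Vikhrmodels:

```python
# Minimal sketch: generating a response from QVikhr-3-8B-Instruction
# via Hugging Face transformers. The sampling settings and example
# prompt below are illustrative assumptions, not official defaults.

MODEL_ID = "Vikhrmodels/QVikhr-3-8B-Instruction"


def build_messages(user_prompt: str) -> list[dict]:
    # Wrap a plain user prompt in the chat structure expected by
    # tokenizer.apply_chat_template().
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Heavy imports are kept local so the helper above can be used
    # without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping special tokens.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    # The model accepts Russian or English prompts interchangeably.
    print(generate("Объясни закон Ома простыми словами."))
```

The same chat-template pattern applies to English prompts; only the content of the user message changes.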
Quantized variants (GGUF, MLX 4-bit, MLX 8-bit) are also available for optimized deployment.