Vikhrmodels/QVikhr-3-8B-Instruction

Parameters: 8B · Precision: FP8 · Context length: 32,768 tokens · License: apache-2.0
Overview

QVikhr-3-8B-Instruction: Bilingual LLM for Russian and English

QVikhr-3-8B-Instruction is an 8-billion-parameter instruction-tuned language model developed by Vikhrmodels, built on the Qwen3-8B architecture. The model is specialized for bilingual (Russian and English) text processing, having undergone supervised fine-tuning (SFT) on GrandMaster2, an extensive synthetic Russian-language dataset.

Key Capabilities

  • Bilingual Proficiency: Optimized for high-efficiency text processing, instruction generation, contextual responses, and text analysis in both Russian and English.
  • Enhanced Performance: Achieves a DOoM score of 0.445 on mathematical and physics tasks, outperforming its base model Qwen3-8B (0.417) and approaching GPT-4.1 (0.466).
  • Instruction Following: Designed for instruction-based learning tasks, making it suitable for generating precise and context-aware responses.

Good for

  • Applications requiring accurate and fast text processing in Russian and English.
  • Instruction-based tasks and contextual text analysis.
  • Integration into professional environments and custom applications where strong bilingual capabilities, particularly in Russian, are essential.

Quantized variants (GGUF, MLX 4-bit, MLX 8-bit) are also available for optimized deployment.
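As a quick start, the full-precision checkpoint can be loaded with the Hugging Face `transformers` library. This is a minimal, hedged sketch: the repo id is taken from the card title, the generation settings are illustrative defaults rather than official recommendations, and `device_map="auto"` additionally requires the `accelerate` package.

```python
# Hedged usage sketch for QVikhr-3-8B-Instruction via transformers.
# Assumptions: repo id from the card title; illustrative generation
# settings; `transformers` (and `accelerate` for device_map="auto")
# installed; enough memory for an 8B model.

MODEL_ID = "Vikhrmodels/QVikhr-3-8B-Instruction"


def build_messages(user_prompt: str) -> list:
    """Qwen3-style chat models expect a list of role/content dicts."""
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and return the generated completion text."""
    # Imported inside the function so the module can be used
    # (e.g. for prompt construction) without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )

    # Render the chat messages with the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Example call (commented out to avoid downloading 8B weights on import);
# the model accepts Russian and English prompts alike:
# print(generate("Кратко объясни, что такое градиентный спуск."))
```

For the GGUF and MLX variants mentioned above, the corresponding runtimes (llama.cpp and mlx-lm, respectively) would be used instead of `transformers`.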