VikhrModels' Vikhr-Nemo-12B-Instruct-R-21-09-24 is a 12-billion-parameter unimodal LLM, an enhanced version of Mistral-Nemo-Instruct-2407, adapted primarily for Russian and English. It features a 32768-token context length and is optimized for reasoning, summarization, code generation, roleplay, dialogue, and high-performance RAG. The model was trained with SFT followed by SMPO, a custom DPO variant, and includes a distinctive Grounded RAG mode for document-based question answering.
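As a minimal sketch of how a Grounded RAG conversation might be assembled: the convention shown here (retrieved documents serialized as JSON in a dedicated `documents` turn before the user query) is an assumption for illustration; consult the model card for the authoritative chat schema.

```python
import json

# Retrieved documents to ground the answer on (contents are illustrative).
documents = [
    {
        "doc_id": 0,
        "title": "Company FAQ",
        "content": "Refunds are processed within 14 days of the request.",
    },
]

# Assumed Grounded RAG conversation layout: system prompt, a dedicated
# "documents" turn carrying the corpus as JSON, then the user question.
messages = [
    {"role": "system", "content": "Answer using only the provided documents."},
    {"role": "documents", "content": json.dumps(documents, ensure_ascii=False)},
    {"role": "user", "content": "How long do refunds take?"},
]

# With transformers installed, the prompt would then be rendered through the
# model's own chat template, e.g.:
#   tok = AutoTokenizer.from_pretrained(
#       "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24")
#   prompt = tok.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True)
print(messages[1]["role"])  # the documents turn
```

The point of the dedicated documents turn is that the model can cite or refuse based strictly on the supplied passages rather than its parametric knowledge.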