racineai/Berthier-Mistral-Military-24B

Vision · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Apr 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Berthier-Mistral-Military-24B by Racine.ai is a 24-billion-parameter language model built on Mistral-Small-3.1-24B-Base-2503 and specialized for the defense and security domain. It underwent continued pre-training on approximately 4 billion tokens of open-source military content across 30 languages, with a focus on French and NATO military doctrine. The model excels at military-specific knowledge, outperforming its base model by 6.7 percentage points on the Hard Mil Bench evaluation. It is intended for research and educational use in defense policy, strategic studies, and military history.


Berthier-Mistral-Military-24B: Specialized for Defense and Security

Berthier-Mistral-Military-24B is a 24-billion-parameter language model developed by Racine.ai, tailored specifically for the defense and security sector. It is based on mistralai/Mistral-Small-3.1-24B-Base-2503 and underwent continued pre-training on a multilingual corpus of open-source military content totaling approximately 4 billion tokens across 30 languages, with a strong emphasis on French and English.

Key Capabilities and Features

  • Domain Specialization: Highly proficient in French and NATO military doctrine, equipment, and institutional vocabulary.
  • Performance: Achieves a 6.7 percentage point improvement over its base model on the Hard Mil Bench, scoring 57.6%. It matches or surpasses larger, newer open-source models like Gemma 4 31B and Qwen 3.5 27B on this specialized benchmark.
  • Multilingual Support: Trained on content in 30 languages, ensuring broad coverage of global military information.
  • Instruction Following: Utilizes a TIES merge with the instruct variant of the base model to preserve strong conversational and instruction-following abilities alongside its domain knowledge.
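To make the TIES merge mentioned above concrete, here is a toy pure-Python sketch of the procedure (trim task vectors, elect a per-coordinate sign, then average the agreeing entries) on flat weight vectors. This is an illustration of the general TIES algorithm, not Racine.ai's actual merge recipe; real merges operate tensor-by-tensor with tooling such as mergekit, and the parameter values here are made up.

```python
def ties_merge(base, finetuned, keep_frac=0.5):
    """Merge fine-tuned weight vectors into `base` via TIES:
    trim -> elect sign -> disjoint average."""
    # 1. Task vectors: what each fine-tune changed relative to the base.
    deltas = [[f - b for f, b in zip(ft, base)] for ft in finetuned]

    # 2. Trim: keep only the largest-magnitude `keep_frac` entries
    #    of each task vector, zeroing the rest.
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * keep_frac))
        cutoff = sorted(map(abs, d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= cutoff else 0.0 for x in d])

    # 3. Elect a sign per coordinate by total magnitude, then average
    #    only the entries that agree with the elected sign.
    merged = []
    for i, b in enumerate(base):
        col = [t[i] for t in trimmed]
        pos = sum(x for x in col if x > 0)
        neg = -sum(x for x in col if x < 0)
        sign = 1.0 if pos >= neg else -1.0
        agreeing = [x for x in col if x * sign > 0]
        delta = sum(agreeing) / len(agreeing) if agreeing else 0.0
        merged.append(b + delta)
    return merged
```

The sign-election step is what lets a domain-adapted variant and an instruct variant be combined without their conflicting parameter updates cancelling each other out.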

Intended Use Cases

This model is designed for research and educational purposes only within the defense and security domain. Potential applications include:

  • Assisting research in defense policy, strategic studies, and military history.
  • Serving as an educational tool for higher military education institutions.
  • Providing revision aid for officer trainees and students in military academies.
  • Acting as a reference benchmark for research on domain adaptation of large language models.
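For the research and educational workflows above, the model could be queried through the Hugging Face `transformers` text-generation pipeline roughly as sketched below. This is an untested sketch, assuming a local `transformers` install and enough GPU memory for a 24B model; the system prompt and the example question are illustrative, not part of the model card.

```python
# Hypothetical usage sketch for racineai/Berthier-Mistral-Military-24B.
MODEL_ID = "racineai/Berthier-Mistral-Military-24B"


def build_messages(question: str) -> list[dict]:
    """Build a chat-format message list for a research/education query."""
    system = (
        "You are a research assistant for defense policy, strategic "
        "studies, and military history. Answer for educational use only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


def ask(question: str, max_new_tokens: int = 256) -> str:
    # Deferred import so the prompt-building helper stays dependency-free.
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = pipe(build_messages(question), max_new_tokens=max_new_tokens)
    # With chat-style input, generated_text is the full message list;
    # the assistant's reply is the last entry.
    return out[0]["generated_text"][-1]["content"]


if __name__ == "__main__":
    print(ask("Summarize the role of the NATO Military Committee."))
```

Any deployment along these lines would still need the guardrails implied by the out-of-scope uses below, e.g. refusing targeting-related queries at the application layer.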

Out-of-scope uses explicitly include generating target lists, informing target-selection decisions, or integration into fire-control/autonomous weapon systems. Users are responsible for ensuring compliance with International Humanitarian Law for any use in armed conflict.