mistralai/Mistral-Small-3.2-24B-Instruct-2506

Vision · 24B parameters · FP8 · 32768-token context
License: apache-2.0

Mistral-Small-3.2-24B-Instruct-2506 Overview

Mistral-Small-3.2-24B-Instruct-2506 is an updated 24-billion-parameter instruction-tuned model from Mistral AI that builds on its predecessor, Mistral-Small-3.1. This iteration focuses on refining core capabilities that matter for developer applications.

Key Improvements & Capabilities

  • Enhanced Instruction Following: Demonstrates improved accuracy in adhering to precise instructions, as evidenced by significant gains in Wildbench v2 (from 55.6% to 65.33%) and Arena Hard v2 (from 19.56% to 43.1%).
  • Reduced Repetition Errors: Significantly decreases instances of infinite generations or repetitive outputs, with internal testing showing the error rate falling from 2.11% to 1.29%.
  • Robust Function Calling: Features a more reliable and robust function calling template, facilitating better integration with tools and external systems.
  • Multimodal Reasoning: Retains vision capabilities, allowing it to process and reason over image inputs, as shown in examples involving image-based decision-making.
  • Strong STEM Performance: Maintains competitive performance in STEM categories, with slight improvements in MMLU Pro (69.06%), MBPP Plus (78.33%), and HumanEval Plus (92.90%).
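The function-calling improvement above is easiest to see in a request. Below is a minimal sketch of a tool-calling request payload in the OpenAI-compatible chat format that servers such as vLLM accept for this model; the `get_weather` tool and its schema are hypothetical, chosen purely for illustration.

```python
import json

MODEL_ID = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"

# Hypothetical tool definition -- the name, description, and parameters
# are illustrative, not part of the model card.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": tools,
    "tool_choice": "auto",
}

# In practice this payload would be POSTed to an OpenAI-compatible
# /v1/chat/completions endpoint; here we just show it as JSON.
print(json.dumps(payload, indent=2))
```

With a robust function-calling template, the model's reply would contain a structured `tool_calls` entry (tool name plus JSON arguments) rather than free-text, which is what makes downstream tool integration reliable.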

When to Use This Model

This model is particularly well suited to use cases demanding high precision in instruction adherence, reliable function calling, and robust multimodal understanding. Its reduced tendency toward repetitive output also makes it a good fit for applications requiring concise, controlled generation. It is recommended for developers building applications that combine advanced reasoning, tool integration, and multimodal input processing.
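For the multimodal case, image inputs are typically passed as base64 data URLs inside an OpenAI-style multimodal message. The sketch below, assuming an OpenAI-compatible server such as vLLM, shows how such a message is assembled; the image bytes are a placeholder, and a real call would read an actual image file.

```python
import base64
import json

# Placeholder bytes standing in for a real image file.
fake_image_bytes = b"\x89PNG placeholder bytes for illustration"
b64 = base64.b64encode(fake_image_bytes).decode("ascii")

# One user turn mixing text and an image, in the OpenAI-style
# multimodal content-part format.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        },
    ],
}

payload = {
    "model": "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    "messages": [message],
}

# Serialize to confirm the payload is valid JSON before sending.
print(json.dumps(payload)[:120])
```

The same message list can carry follow-up text turns, letting the model reason over the image across a multi-turn conversation.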