XiaomiMiMo/MiMo-V2-Flash

Status: Warm
Visibility: Public
Parameters: 310B
Quantization: FP8
Context length: 32768
Dec 16, 2025
License: MIT
Hugging Face

MiMo-V2-Flash by XiaomiMiMo is a 309B-total-parameter Mixture-of-Experts (MoE) language model with 15B active parameters per token, designed for high-speed reasoning and agentic workflows. It features a novel hybrid attention architecture and Multi-Token Prediction (MTP) for efficient inference and long-context handling up to 256K tokens. Its strength in complex reasoning and agentic tasks, including code generation and web development, comes from advanced post-training techniques such as Multi-Teacher On-Policy Distillation (MOPD) and large-scale agentic RL.
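To get started, here is a minimal usage sketch, assuming the model is served behind an OpenAI-compatible chat completions endpoint; the base URL and key handling below are assumptions for illustration, not confirmed details of the Featherless API.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint.
# The base URL is an assumption; substitute your provider's URL and key.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",                    # placeholder
)

response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-V2-Flash",
    messages=[{"role": "user", "content": "Outline a plan for a small web app."}],
)
print(response.choices[0].message.content)
```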


Popular Sampler Settings

Most commonly used values from Featherless users. See the sketches after this list for how these settings filter the token distribution and how to pass them in a request.

temperature
This setting influences the sampling randomness. Lower values make the model more deterministic; higher values introduce randomness. Zero is greedy sampling.
top_p
This setting controls the cumulative probability of considered top tokens. Must be in (0, 1]. Set to 1 to consider all tokens.
top_k
This setting limits the number of top tokens to consider. Set to -1 to consider all tokens.
frequency_penalty
This setting penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
presence_penalty
This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
repetition_penalty
This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.
min_p
This setting sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
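To make the top_k, top_p, and min_p rules concrete, here is a small self-contained sketch of how each one filters a toy next-token distribution; the probabilities are illustrative, not taken from the model.

```python
# Toy next-token distribution (token -> probability), for illustration only.
probs = {"the": 0.50, "a": 0.25, "cat": 0.15, "dog": 0.07, "xyzzy": 0.03}

def top_k_filter(probs, k):
    # Keep only the k most probable tokens.
    kept = sorted(probs, key=probs.get, reverse=True)[:k]
    return {t: probs[t] for t in kept}

def top_p_filter(probs, p):
    # Keep the smallest set of most-probable tokens whose cumulative
    # probability reaches p (nucleus sampling).
    kept, cum = {}, 0.0
    for t in sorted(probs, key=probs.get, reverse=True):
        kept[t] = probs[t]
        cum += probs[t]
        if cum >= p:
            break
    return kept

def min_p_filter(probs, min_p):
    # Keep tokens whose probability is at least min_p times the
    # probability of the single most likely token.
    cutoff = min_p * max(probs.values())
    return {t: q for t, q in probs.items() if q >= cutoff}

print(top_k_filter(probs, 2))    # {'the': 0.5, 'a': 0.25}
print(top_p_filter(probs, 0.9))  # {'the': 0.5, 'a': 0.25, 'cat': 0.15}
print(min_p_filter(probs, 0.1))  # cutoff 0.05: keeps all but 'xyzzy'
```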
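In a request, temperature, top_p, frequency_penalty, and presence_penalty are standard OpenAI-spec fields; top_k, min_p, and repetition_penalty are assumed here to pass through extra_body, as OpenAI-compatible servers commonly accept. The values shown are illustrative, not the popular values from Featherless users.

```python
# Sketch of passing sampler settings; the extra_body field names are
# assumptions based on common OpenAI-compatible servers, not a confirmed
# Featherless schema.
from openai import OpenAI

client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-V2-Flash",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    temperature=0.7,        # lower = more deterministic; 0 = greedy
    top_p=0.95,             # nucleus cutoff, must be in (0, 1]
    frequency_penalty=0.0,  # > 0 discourages frequent tokens
    presence_penalty=0.0,   # > 0 discourages tokens already present
    extra_body={
        "top_k": 40,                # assumed pass-through; -1 considers all
        "min_p": 0.0,               # assumed pass-through; 0 disables
        "repetition_penalty": 1.0,  # assumed pass-through; 1 is neutral
    },
)
print(response.choices[0].message.content)
```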