MiniMaxAI/MiniMax-M2
Text Generation · Concurrency Cost: 4 · Model Size: 229B · Quantization: FP8 · Context Length: 32K · Published: Oct 22, 2025 · License: modified-mit · Architecture: Transformer · Open Weights · Status: Warm

MiniMaxAI's MiniMax-M2 is a 229-billion-parameter Mixture-of-Experts (MoE) model with 10 billion active parameters per token, designed for high efficiency in coding and agentic workflows. It features a 32,768-token context length and excels at multi-file edits, code-run-fix loops, and complex toolchain execution across varied environments. MiniMax-M2 offers competitive general intelligence, ranking highly among open-weights models, while its small activation size keeps latency and cost low, making it well suited to interactive agents and batched sampling.
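The sketch below shows one way to query the model by its listed identifier. It assumes an OpenAI-compatible chat-completions endpoint at api.featherless.ai; the base URL and the FEATHERLESS_API_KEY environment variable name are assumptions to verify against your account settings.

```python
# Minimal sketch: calling MiniMax-M2 through an assumed OpenAI-compatible
# chat-completions endpoint. Substitute your provider's actual base URL
# and credential handling.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",  # model ID as listed on this page
    messages=[
        {"role": "user", "content": "Refactor this function to handle empty input."},
    ],
    max_tokens=512,  # stays well inside the 32K context window
)
print(response.choices[0].message.content)
```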

Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model. Each config sets the following sampler parameters:

temperature: scales the randomness of token sampling
top_p: nucleus sampling; restricts choices to the smallest token set whose cumulative probability exceeds p
top_k: restricts choices to the k most likely tokens
frequency_penalty: penalizes tokens in proportion to how often they have already appeared
presence_penalty: penalizes tokens that have appeared at all
repetition_penalty: multiplicative penalty on previously generated tokens
min_p: drops tokens whose probability falls below a fraction of the top token's probability
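To show how these knobs map onto a request, the sketch below passes them through the same assumed OpenAI-compatible client from the earlier example. The values are placeholders, not one of the actual top-3 configs; the non-standard fields (top_k, repetition_penalty, min_p) are forwarded via extra_body, which the serving backend may or may not honor.

```python
# Illustrative sampler configuration; values are placeholders, not a real
# Featherless top-3 config. Reuses the `client` from the sketch above.
response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "Write a haiku about code review."}],
    # Standard OpenAI-compatible sampler parameters:
    temperature=0.7,        # randomness of sampling
    top_p=0.9,              # nucleus sampling cutoff
    frequency_penalty=0.0,  # penalize frequently repeated tokens
    presence_penalty=0.0,   # penalize tokens that already appeared
    # Non-standard parameters, sent as extra JSON fields; whether they are
    # honored depends on the serving backend:
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```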