unsloth/Mistral-Small-24B-Instruct-2501
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jan 30, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

unsloth/Mistral-Small-24B-Instruct-2501 is a 24-billion-parameter instruction-tuned language model developed by Mistral AI, fine-tuned from Mistral-Small-24B-Base-2501. It features a 32k context window and is optimized for agentic use, including native function calling and JSON output. The model performs well on conversational and reasoning tasks, supports dozens of languages, and is small enough for efficient local deployment on hardware such as an RTX 4090 or a MacBook with 32GB of RAM.
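As a sketch of how the model's native function calling might be exercised, the snippet below builds an OpenAI-style chat-completions payload with a tool definition. The endpoint format, the `get_weather` tool, and the field names are illustrative assumptions, not something specified on this page:

```python
import json

# Illustrative assumption: an OpenAI-compatible chat-completions payload with a
# tool schema, the format commonly used to drive native function calling.
def build_function_call_payload(user_message: str) -> dict:
    return {
        "model": "unsloth/Mistral-Small-24B-Instruct-2501",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool, for illustration only
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_function_call_payload("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

The model is expected to respond with a structured tool call (the function name plus JSON arguments) rather than free text when a tool matches the request.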


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model cover the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
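These parameters can be passed directly in a generation request. A minimal sketch follows; the numeric values are illustrative placeholders, not the actual popular configurations from Featherless user statistics, which this page does not list:

```python
# Illustrative sampler configuration covering the parameters listed above.
# All values are placeholder examples, not real user data.
sampler_settings = {
    "temperature": 0.7,        # randomness of token selection
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by how often they've appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below this fraction of the top prob
}

# Merge sampler settings into a completion request body.
request = {
    "model": "unsloth/Mistral-Small-24B-Instruct-2501",
    "prompt": "Explain function calling in one sentence.",
    **sampler_settings,
}
print(sorted(sampler_settings))
```

Lower temperature and top_p make outputs more deterministic; repetition_penalty and min_p are common additions for long-form or roleplay generation.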