unsloth/Mistral-Small-24B-Base-2501
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Jan 30, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

Mistral-Small-24B-Base-2501 is a 24-billion-parameter base language model developed by Mistral AI, serving as the foundation for the instruction-tuned Mistral Small 3. The model is designed to be exceptionally "knowledge-dense" while remaining deployable locally: once quantized, it fits on a single RTX 4090 or a MacBook with 32GB of RAM. It features a 32k context window and a 131k-vocabulary Tekken tokenizer, making it suitable for applications that need efficient, powerful language processing.
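The single-GPU claim follows from simple arithmetic on weight memory. A rough sketch (illustrative only; real usage adds KV cache, activations, and runtime overhead on top of the weights):

```python
# Back-of-envelope weight-memory estimate for a 24B-parameter model.
# Actual VRAM needs are higher (KV cache, activations, framework overhead).
PARAMS = 24e9  # 24 billion parameters

def weight_gb(bytes_per_param: float) -> float:
    """Approximate weight memory in GB at a given precision."""
    return PARAMS * bytes_per_param / 1e9

print(f"BF16 : {weight_gb(2):.0f} GB")    # ~48 GB -- needs multiple GPUs
print(f"FP8  : {weight_gb(1):.0f} GB")    # ~24 GB -- borderline on a 24GB RTX 4090
print(f"4-bit: {weight_gb(0.5):.0f} GB")  # ~12 GB -- comfortable single-GPU fit
```

This is why the FP8 quant listed above is the practical format for this model on consumer hardware: at full BF16 precision the weights alone exceed a single card's memory.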


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model tune the following sampler settings:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
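These parameters are typically passed alongside the prompt in a completion request. A minimal sketch of such a request payload, assuming an OpenAI-compatible API; the values below are illustrative defaults, not the actual user configurations referenced above:

```python
# Hypothetical sampler configuration for an OpenAI-compatible completion
# request. Values are illustrative, NOT the Featherless user statistics.
sampler_config = {
    "temperature": 0.7,         # randomness of token selection
    "top_p": 0.9,               # nucleus sampling: keep top tokens summing to this prob
    "top_k": 40,                # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they have appeared
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.1,  # multiplicative discouragement of repeats
    "min_p": 0.05,              # drop tokens below this fraction of the top token's prob
}

payload = {
    "model": "unsloth/Mistral-Small-24B-Base-2501",
    "prompt": "Once upon a time",
    "max_tokens": 64,
    **sampler_config,
}
```

Since this is a base (non-instruct) model, it is prompted by plain text continuation rather than chat messages, which is why the sketch uses a completion-style `prompt` field.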