yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Feb 15, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

yentinglin/Mistral-Small-24B-Instruct-2501-reasoning is a 24 billion parameter instruction-tuned language model developed by Yenting Lin and funded by Ubitus. Fine-tuned from mistralai/Mistral-Small-24B-Instruct-2501, this model is specifically optimized for mathematical reasoning tasks. It demonstrates enhanced performance on benchmarks like MATH-500 and AIME 2025, making it suitable for complex problem-solving applications.


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
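The settings above are the standard sampler knobs accepted by OpenAI-compatible completion APIs such as the one Featherless exposes. As a minimal sketch of how they fit into a request, the helper below assembles a completion payload; the parameter values shown are placeholders for illustration, not the actual top configurations from the page (those values are not listed here).

```python
import json

MODEL_ID = "yentinglin/Mistral-Small-24B-Instruct-2501-reasoning"

def build_request(prompt: str, **sampler) -> dict:
    """Assemble a completion payload. Sampler kwargs map one-to-one onto the
    fields named above (temperature, top_p, top_k, frequency_penalty,
    presence_penalty, repetition_penalty, min_p)."""
    payload = {"model": MODEL_ID, "prompt": prompt}
    payload.update(sampler)  # merge whichever sampler settings were supplied
    return payload

# Placeholder values, chosen only to illustrate the payload shape:
req = build_request(
    "Prove that the sum of two odd integers is even.",
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(json.dumps(req, indent=2))
```

Only the parameters you pass end up in the payload, so any of the listed combinations can be expressed by supplying the corresponding keyword arguments.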