yentinglin/Mistral-Small-24B-Instruct-2501-reasoning is a 24-billion-parameter instruction-tuned language model developed by Yenting Lin and funded by Ubitus. Fine-tuned from mistralai/Mistral-Small-24B-Instruct-2501, it is optimized specifically for mathematical reasoning. The model shows improved performance on benchmarks such as MATH-500 and AIME 2025, making it suitable for complex problem-solving applications.
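A minimal usage sketch is below, assuming the model follows the standard Hugging Face `transformers` chat interface inherited from its Mistral-Small base (the exact prompt format and recommended generation settings should be confirmed against the model card):

```python
# Sketch: querying the model with a math problem via `transformers`.
# Assumption: the fine-tune keeps the base model's chat template, so the
# standard AutoTokenizer/AutoModelForCausalLM path applies.
MODEL_ID = "yentinglin/Mistral-Small-24B-Instruct-2501-reasoning"


def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the chat-message format instruct models expect."""
    return [{"role": "user", "content": problem}]


def main() -> None:
    # Heavyweight: downloads the ~24B-parameter checkpoint and needs a
    # large GPU (or multi-GPU setup with device_map="auto").
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = build_messages(
        "What is the sum of the first 100 positive integers?"
    )
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because reasoning fine-tunes often produce long chain-of-thought answers, a generous `max_new_tokens` budget is usually worthwhile.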