Overview
yentinglin/Mistral-Small-24B-Instruct-2501-reasoning is a 24-billion-parameter instruction-tuned model developed by Yenting Lin and funded by Ubitus. It is a fine-tune of mistralai/Mistral-Small-24B-Instruct-2501, enhanced specifically for mathematical reasoning. The model was trained on 32 H100 GPUs (4 nodes of 8) using reasoning datasets such as OpenR1-Math-220k and s1K-1.1 to improve its problem-solving capabilities.
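A minimal inference sketch using Hugging Face transformers is shown below. This assumes the checkpoint loads through the standard AutoModelForCausalLM path and ships a chat template; the prompt and generation settings are illustrative, not taken from the model card.

```python
# Hedged sketch: loading the model for chat-style inference with transformers.
# Assumes a GPU with enough memory for a 24B model and the `accelerate`
# package for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Mistral-Small-24B-Instruct-2501-reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Prove that the sum of two even integers is even."}
]

# Apply the model's chat template and generate a reasoning response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```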
Key Capabilities
- Enhanced Mathematical Reasoning: Optimized for complex mathematical tasks, demonstrating significant improvements over its base model.
- Benchmark Performance: Achieves 95.0 Pass@1 on MATH-500 and 53.33 Pass@1 on AIME 2025, outperforming the base Mistral-Small-24B-Instruct-2501 and several 32B models on these reasoning benchmarks.
- Instruction-Tuned: Designed to follow instructions effectively for reasoning-focused queries.
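The Pass@1 figures above are the standard pass@k metric with k = 1. As a reference, a minimal sketch of the unbiased pass@k estimator (from the Codex/HumanEval evaluation methodology, not this model card); with one sample per problem, pass@1 reduces to the fraction of problems solved:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts of which c are correct,
    is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k, so a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n samples and c correct, pass@1 is just c / n:
print(pass_at_k(4, 3, 1))  # → 0.75
```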
Good For
- Mathematical Problem Solving: Ideal for applications requiring strong mathematical reasoning, such as educational tools or research in quantitative fields.
- Benchmarking and Research: Useful for researchers evaluating and developing models for advanced reasoning tasks.
- Complex Query Resolution: Suitable for scenarios where precise, logical deductions are critical.