Fardan/Qwen2.5-1.5B-Instruct-Math-Reasoning-SFT-v1

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Fardan/Qwen2.5-1.5B-Instruct-Math-Reasoning-SFT-v1 is a 1.5 billion parameter instruction-tuned causal language model developed by Fardan and fine-tuned from unsloth/Qwen2.5-1.5B-Instruct. It is optimized for mathematical reasoning and supports a context length of 32,768 tokens, making it suited to problems that call for multi-step problem-solving and logical deduction.


Model Overview

Fardan/Qwen2.5-1.5B-Instruct-Math-Reasoning-SFT-v1 is a 1.5 billion parameter instruction-tuned language model developed by Fardan. It was fine-tuned from the unsloth/Qwen2.5-1.5B-Instruct base model using the Unsloth framework for accelerated training, and is aimed at tasks that demand strong mathematical reasoning and problem solving.
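
The weights appear to be published in the standard Hugging Face format, so the model should be loadable with the transformers library. Below is a minimal loading sketch, assuming the repository ID shown above and a GPU with BF16 support; these details are inferred from the page metadata, not from author-provided instructions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Fardan/Qwen2.5-1.5B-Instruct-Math-Reasoning-SFT-v1"

# Load the tokenizer and weights in BF16, matching the quantization listed above.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # BF16 per the model metadata
    device_map="auto",           # requires the accelerate package
)
```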

Key Characteristics

  • Parameter Count: 1.5 billion parameters, a size that balances reasoning quality against compute and memory cost (roughly 3 GB of weights in BF16).
  • Context Length: Supports a 32,768-token context window, leaving room for long multi-step derivations and lengthy word problems.
  • Training Optimization: Fine-tuned with Unsloth, which the author reports made training about 2x faster (a sketch of such a setup follows this list).
  • Specialization: Primarily focused on enhancing performance in mathematical and reasoning-based tasks.
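
The exact training recipe is not published on this card. Purely as an illustration, a minimal Unsloth SFT setup might look like the sketch below; the dataset, LoRA rank, target modules, and hyperparameters are all assumptions, and the SFTTrainer argument names follow older trl releases (newer versions move most of them into SFTConfig):

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model through Unsloth's patched loader, the source of the
# reported ~2x training speedup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B-Instruct",
    max_seq_length=32768,
    dtype=None,          # auto-selects BF16 on supported GPUs
    load_in_4bit=False,
)

# Attach LoRA adapters; rank and target modules here are illustrative,
# not the author's actual values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset; the real math-reasoning SFT data is not published.
math_dataset = Dataset.from_list([
    {"text": "Question: What is 12 * 7?\nAnswer: 12 * 7 = 84."},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=math_dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```

How the author merged or exported the adapters into the final published weights is not stated on the card.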

Ideal Use Cases

  • Mathematical Problem Solving: Suited for applications that involve solving mathematical equations, word problems, and logical puzzles (see the usage sketch after this list).
  • Reasoning Tasks: Effective in scenarios requiring step-by-step logical deduction and analytical thinking.
  • Educational Tools: Can be integrated into platforms for tutoring or generating explanations for mathematical concepts.
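
Continuing from the loading sketch above, the following shows one way to prompt the model for step-by-step reasoning via the Qwen2.5 chat template; the system prompt wording and sampling settings are assumptions, not documented recommendations:

```python
messages = [
    {"role": "system",
     "content": "You are a careful math tutor. Solve problems step by step."},
    {"role": "user",
     "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"},
]

# apply_chat_template wraps the conversation in Qwen2.5's chat markup and
# appends the assistant header so generation starts a fresh reply.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, dropping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```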