Alelcv27/Qwen2.5-3B-Base-Math-v2

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 3.1B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Apr 19, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Tags: Open Weights

Alelcv27/Qwen2.5-3B-Base-Math-v2 is a 3.1-billion-parameter language model developed by Alelcv27 and finetuned from unsloth/Qwen2.5-3B-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which the authors report yields 2x faster training. The model is optimized for mathematical tasks, making it suitable for applications that require strong numerical reasoning and problem solving within its 32,768-token context window.


Overview

Alelcv27/Qwen2.5-3B-Base-Math-v2 is a 3.1-billion-parameter language model developed by Alelcv27. It is finetuned from the unsloth/Qwen2.5-3B-unsloth-bnb-4bit base model and supports a 32,768-token context length. Training focused on mathematical reasoning and combined the Unsloth library, which the authors report gave a 2x speedup, with Hugging Face's TRL library.

Key Capabilities

  • Mathematical Reasoning: Optimized for solving mathematical problems and handling numerical tasks.
  • Efficient Training: Benefits from Unsloth's optimizations, resulting in significantly faster training times.
  • Qwen2.5 Architecture: Built upon the robust Qwen2.5 base, providing a strong foundation for language understanding.

Good for

  • Applications requiring strong mathematical problem-solving.
  • Tasks involving numerical analysis and quantitative reasoning.
  • Developers looking for a compact yet capable model for math-centric use cases.
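For those use cases, a minimal inference sketch with Hugging Face `transformers` might look like the following. The repo id comes from this card; the prompt template and generation settings are assumptions, since the card does not document an official prompt format:

```python
MODEL_ID = "Alelcv27/Qwen2.5-3B-Base-Math-v2"

def build_math_prompt(question: str) -> str:
    """Wrap a question in a simple completion-style template.

    Illustrative only: as a base-model finetune, the card documents
    no chat template, so a plain Question/Answer scaffold is assumed.
    """
    return f"Question: {question}\nAnswer:"

if __name__ == "__main__":
    # transformers is imported here so the prompt helper above stays
    # usable without the library (or the 3B weights) installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer(build_math_prompt("What is 17 * 24?"), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading in BF16 matches the quantization listed above; on constrained hardware, the upstream 4-bit bnb base this model derives from suggests `load_in_4bit`-style quantized loading is also a reasonable option.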