Alelcv27/Qwen2.5-3B-INST-Math-v2

Text generation · Concurrency cost: 1 · Model size: 3.1B · Quant: BF16 · Context length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Alelcv27/Qwen2.5-3B-INST-Math-v2 is a 3.1 billion parameter Qwen2.5 instruction-tuned causal language model developed by Alelcv27. It was finetuned with Unsloth and Hugging Face's TRL library, building on the unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit base. With a 32,768-token context length, it is optimized for mathematical tasks and reasoning.


Overview

Alelcv27/Qwen2.5-3B-INST-Math-v2 is a 3.1 billion parameter instruction-tuned language model based on the Qwen2.5 architecture. Developed by Alelcv27, it was finetuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit using the Unsloth library together with Hugging Face's TRL. According to the model card, finetuning with Unsloth made training roughly 2x faster.
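The card does not publish the training script, but the workflow it describes maps onto Unsloth's FastLanguageModel combined with TRL's SFTTrainer. The sketch below illustrates that pipeline under stated assumptions: the dataset (GSM8K), LoRA settings, and hyperparameters are illustrative placeholders, not the author's actual configuration.

```python
# Minimal sketch of an Unsloth + TRL SFT finetune from the base checkpoint
# named on this card. Dataset, LoRA settings, and hyperparameters are
# illustrative assumptions; the card does not disclose the real ones.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are common defaults,
# not the author's published settings.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical math dataset choice; format each example with the
# model's chat template into a single "text" field for SFT.
dataset = load_dataset("gsm8k", "main", split="train")

def to_text(example):
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL releases call this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=500,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```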

Key Capabilities

  • Efficient Finetuning: Leverages Unsloth for accelerated training, making it a resource-efficient base for task-specific adaptation.
  • Instruction-Following: Designed to follow instructions reliably, suitable for chat and other prompt-based applications.
  • Mathematical Focus: While not explicitly detailed in the README, the model name indicates an optimization for mathematical tasks (see the inference sketch after this list).
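As a concrete illustration of the instruction-following and math-oriented usage above, the snippet below loads the model with Hugging Face transformers and sends a chat-formatted math prompt. The prompt and generation settings are illustrative assumptions, not recommendations from the model card.

```python
# Minimal inference sketch with Hugging Face transformers; the prompt
# and sampling settings are illustrative, not values from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Qwen2.5-3B-INST-Math-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Solve step by step: if 3x + 7 = 22, what is x?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```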

Good For

  • Developers seeking a compact (3.1B parameters) yet capable instruction-tuned model.
  • Applications requiring efficient inference due to its smaller size.
  • Use cases that benefit from a model finetuned with performance-enhancing libraries like Unsloth.
  • Potential use in mathematical problem-solving or reasoning tasks, as its name suggests.