Alelcv27/Qwen2.5-3B-INST-Math

Text generation · Concurrency cost: 1 · Model size: 3.1B · Quantization: BF16 · Context length: 32k · Published: Apr 16, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Alelcv27/Qwen2.5-3B-INST-Math is a 3.1 billion parameter Qwen2.5-based instruction-tuned causal language model developed by Alelcv27. It was fine-tuned using Unsloth and Hugging Face's TRL library, which speeds up training, and is intended for general instruction-following tasks.


Overview

Alelcv27/Qwen2.5-3B-INST-Math is a 3.1 billion parameter instruction-tuned model based on the Qwen2.5 architecture. Developed by Alelcv27, it was fine-tuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit using the Unsloth library together with Hugging Face's TRL library, a combination the developer reports as roughly 2x faster to train.
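The card does not include the training script, but the recipe it describes (a 4-bit Unsloth base checkpoint fine-tuned with TRL's SFTTrainer) generally looks like the minimal sketch below. The dataset name, LoRA settings, and hyperparameters are illustrative assumptions rather than the values used for this model, and exact argument names can differ between TRL versions.

```python
# Minimal sketch of an Unsloth + TRL fine-tune of the 4-bit Qwen2.5-3B-Instruct checkpoint.
# Dataset, LoRA settings, and hyperparameters below are illustrative, not the model's actual recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit Unsloth checkpoint named on the card as the fine-tuning base.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("example/math-instruct-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="qwen2.5-3b-inst-math",
    ),
)
trainer.train()
```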

Key Characteristics

  • Base Model: Qwen2.5-3B-Instruct (fine-tuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit)
  • Parameter Count: 3.1 billion
  • Training Efficiency: Fine-tuned roughly 2x faster using Unsloth and Hugging Face's TRL library
  • Developer: Alelcv27
  • License: Apache-2.0

Use Cases

This model is suited to general instruction-following tasks where a compact yet capable language model is needed. Its efficient fine-tuning process makes it a reasonable candidate for applications that require rapid iteration or deployment in resource-constrained environments, while still benefiting from the capabilities of the Qwen2.5 architecture.
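For inference, the model can be loaded with the standard Hugging Face Transformers API. The sketch below is a minimal, illustrative example; the prompt and generation settings are assumptions rather than recommendations from the model card.

```python
# Illustrative inference sketch using Hugging Face Transformers.
# The prompt and generation settings are examples only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Qwen2.5-3B-INST-Math"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 17 * 24? Show your reasoning."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```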