LucasjsBatista/qwen2.5-3b-irpf2026

Text Generation · Model Size: 3.1B · Quant: BF16 · Context Length: 32k · Published: Apr 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

LucasjsBatista/qwen2.5-3b-irpf2026 is a 3.1-billion-parameter Qwen2.5-based causal language model, fine-tuned by LucasjsBatista. It was trained efficiently with Unsloth and Hugging Face's TRL library on top of the unsloth/Qwen2.5-3B-Instruct-bnb-4bit base. Its main differentiator is this optimized training process, which makes it a compact yet capable instruction-tuned model.


Model Overview

LucasjsBatista/qwen2.5-3b-irpf2026 is a 3.1-billion-parameter instruction-tuned language model developed by LucasjsBatista. It is fine-tuned from the unsloth/Qwen2.5-3B-Instruct-bnb-4bit base model, using the Unsloth library together with Hugging Face's TRL for accelerated training.
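Because this is a standard Qwen2.5-based causal language model, it should load through the usual Hugging Face `transformers` API. A minimal sketch, assuming the model ID listed on this card and an illustrative prompt (not taken from the model's documentation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LucasjsBatista/qwen2.5-3b-irpf2026"

# Load tokenizer and model; BF16 matches the quantization listed on this card.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Qwen2.5-Instruct models ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Explain what an instruction-tuned model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Using the chat template (rather than raw text) matters for instruction-tuned checkpoints, since the fine-tune was performed on conversations in that format.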

Key Characteristics

  • Base Model: Qwen2.5-3B-Instruct architecture.
  • Parameter Count: 3.1 billion parameters.
  • Training Efficiency: Fine-tuned with Unsloth, which is reported to make fine-tuning roughly 2x faster than standard approaches.
  • License: Distributed under the Apache-2.0 license.

Use Cases

This model is particularly well-suited for:

  • Applications requiring a compact, instruction-following language model.
  • Scenarios where efficient training and deployment of a Qwen2.5-based model are critical.
  • Tasks benefiting from a fine-tuned model with a smaller footprint.