Alelcv27/Qwen2.5-3B-INST-Code

Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Alelcv27/Qwen2.5-3B-INST-Code is a 3.1 billion parameter instruction-tuned causal language model, finetuned by Alelcv27 from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. With a 32,768-token context window, it is suited to instruction-following tasks, especially those involving long inputs.


Model Overview

Alelcv27/Qwen2.5-3B-INST-Code is a 3.1 billion parameter instruction-tuned language model developed by Alelcv27. It is finetuned from the unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit base model using the Unsloth library together with Hugging Face's TRL library.
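The card does not publish the training script, but a minimal sketch of the kind of Unsloth + TRL supervised fine-tuning run described might look like the following. The dataset file, LoRA hyperparameters, and trainer arguments are assumptions rather than the author's actual configuration, and SFTTrainer argument names vary across TRL versions.

```python
# Hypothetical sketch of an Unsloth + TRL SFT run like the one described.
# Dataset path, LoRA ranks, and trainer settings are illustrative assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model this finetune starts from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters (illustrative hyperparameters).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=32768,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```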

Key Characteristics

  • Efficient Training: Unsloth's optimization techniques enabled roughly 2x faster training than a standard fine-tuning setup.
  • Instruction-Tuned: Designed to follow instructions effectively, making it suitable for a variety of conversational and task-oriented applications.
  • Context Length: Supports a 32,768-token context window, allowing it to process long inputs and maintain coherence over extended interactions (a usage sketch follows this list).
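As a reference, here is a minimal inference sketch using the standard Hugging Face Transformers chat-template workflow for Qwen2.5 models. The system message, prompt, and generation settings are illustrative assumptions, not recommendations from the model card.

```python
# Minimal inference sketch; prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Qwen2.5-3B-INST-Code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # BF16 weights load as-is on supported hardware
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# apply_chat_template formats the conversation with the Qwen2.5 chat markup.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```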

Use Cases

This model suits developers who want a capable, compact instruction-following model. At 3.1 billion parameters it is inexpensive to serve, and the Unsloth-based training workflow keeps further fine-tuning iterations fast, making it a reasonable base for rapid iteration and deployment.