gjyotin305/Qwen2.5-3B-Instruct_new_alpaca_003

Hugging Face

  • Task: Text Generation
  • Model Size: 3.1B parameters
  • Quantization: BF16
  • Context Length: 32K
  • Published: Jan 13, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

gjyotin305/Qwen2.5-3B-Instruct_new_alpaca_003 is a 3.1-billion-parameter instruction-tuned causal language model developed by gjyotin305. It is finetuned from unsloth/Qwen2.5-3B-Instruct and was trained with Unsloth and Hugging Face's TRL library, which speeds up training. The model targets general instruction-following tasks, building on the Qwen2.5 architecture and a 32K-token context length.


Model Overview

The gjyotin305/Qwen2.5-3B-Instruct_new_alpaca_003 is an instruction-tuned large language model with approximately 3.1 billion parameters. It is built upon the Qwen2.5 architecture and offers a substantial context length of 32,768 tokens.
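The headline specifications can be checked against the configuration shipped with the checkpoint. Below is a minimal sketch using transformers' AutoConfig; the values in the comments are expectations based on the standard Qwen2.5-3B configuration, not figures read from this card:

```python
from transformers import AutoConfig

# Fetch the model's config.json from the Hugging Face Hub.
config = AutoConfig.from_pretrained("gjyotin305/Qwen2.5-3B-Instruct_new_alpaca_003")

print(config.model_type)               # expected: "qwen2"
print(config.max_position_embeddings)  # expected: 32768 (the 32K context length)
print(config.torch_dtype)              # expected: torch.bfloat16 (BF16 weights)
```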

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-3B-Instruct.
  • Training Efficiency: Finetuned with the Unsloth library in combination with Hugging Face's TRL library, which accelerates training relative to a standard fine-tuning loop (see the sketch after this list).
  • Developer: gjyotin305.
  • License: Released under the Apache-2.0 license, allowing for broad usage and distribution.
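
The card does not publish the exact training recipe, but a representative Unsloth + TRL supervised fine-tuning setup looks roughly like the sketch below. The dataset, LoRA rank, and hyperparameters are illustrative assumptions (the "_alpaca_" suffix in the model name hints at an Alpaca-style instruction dataset, but this is a guess), and keyword names vary slightly across TRL versions:

```python
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the base model named in the card with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-3B-Instruct",
    max_seq_length=2048,   # assumption; the card does not state a training length
    load_in_4bit=True,     # assumption: QLoRA-style memory-efficient training
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical dataset choice; swap in whatever instruction data you use.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def format_examples(batch):
    # Turn Alpaca-style fields into single prompt strings (batched).
    return [
        f"### Instruction:\n{ins}\n\n### Input:\n{inp}\n\n### Response:\n{out}"
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,        # `processing_class=` in newer TRL releases
    train_dataset=dataset,
    formatting_func=format_examples,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,           # illustrative; real runs train far longer
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```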

Intended Use Cases

This model is suitable for a variety of instruction-following tasks, such as question answering, summarization, and conversational assistance. Its 3.1B parameter count makes it a practical choice where compute or memory is constrained, while its 32K-token context window supports longer inputs such as documents or multi-turn conversations.
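
As a concrete starting point, the model can be loaded and queried with Hugging Face transformers via Qwen2.5's chat template. The following is a minimal sketch; the prompt and generation settings are illustrative, and the BF16 dtype mirrors the quantization listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-3B-Instruct_new_alpaca_003"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Qwen2.5 instruct checkpoints ship a chat template for prompting.
messages = [
    {"role": "user",
     "content": "Explain instruction tuning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```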