gjyotin305/Qwen2.5-3B-Instruct_old_sft_alpaca_005

Text generation · open weights

  • Model Size: 3.1B
  • Quantization: BF16
  • Context Length: 32k
  • Concurrency Cost: 1
  • Published: Jan 9, 2026
  • License: apache-2.0
  • Architecture: Transformer

gjyotin305/Qwen2.5-3B-Instruct_old_sft_alpaca_005 is a 3.1 billion parameter instruction-tuned causal language model, fine-tuned by gjyotin305 from unsloth/Qwen2.5-3B-Instruct. It was trained with Unsloth and Hugging Face's TRL library to speed up fine-tuning, and is designed for general instruction-following tasks, with a 32,768-token context length for processing longer inputs.


Model Overview

The gjyotin305/Qwen2.5-3B-Instruct_old_sft_alpaca_005 is a 3.1 billion parameter instruction-tuned language model. Developed by gjyotin305, this model is a fine-tuned version of unsloth/Qwen2.5-3B-Instruct.

Key Characteristics

  • Base Model: Fine-tuned from the Qwen2.5-3B-Instruct architecture.
  • Training Efficiency: Utilizes Unsloth and Hugging Face's TRL library for accelerated fine-tuning, reportedly achieving 2x faster training.
  • Context Length: Supports a substantial context window of 32,768 tokens, suitable for handling extensive conversational histories or long documents (see the loading sketch after this list).
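
The snippet below is a minimal inference sketch using the standard transformers API. The model ID and BF16 precision come from this card; the example prompt and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-3B-Instruct_old_sft_alpaca_005"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Qwen2.5-Instruct models ship with a chat template, so plain message
# dicts can be formatted via apply_chat_template.
messages = [
    {"role": "user", "content": "Summarize the main idea of instruction tuning."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```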

Use Cases

This model is primarily suited for general instruction-following tasks, benefiting from its instruction tuning and large context window. Its efficient fine-tuning recipe also makes further adaptation to specific downstream applications relatively cheap; a reproduction sketch follows below.
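
For readers who want to reproduce a similar adaptation, the sketch below shows Alpaca-style supervised fine-tuning of the base model with TRL's SFTTrainer. The dataset choice and hyperparameters are assumptions for illustration, not the author's actual configuration, and the Unsloth acceleration layer is omitted for brevity.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumed Alpaca-style dataset; the card does not name the exact data used.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model="unsloth/Qwen2.5-3B-Instruct",  # base model named on this card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="qwen2.5-3b-sft-alpaca",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,  # illustrative value, not the author's setting
    ),
)
trainer.train()
```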