gjyotin305/Qwen2.5-3B-Instruct_adaptive_tune_no_ref

Text Generation | Concurrency Cost: 1 | Model Size: 3.1B | Quant: BF16 | Ctx Length: 32k | Published: Mar 26, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Warm

gjyotin305/Qwen2.5-3B-Instruct_adaptive_tune_no_ref is a 3.1-billion-parameter instruction-tuned causal language model, finetuned by gjyotin305 from the Qwen2.5-3B-Instruct base. Training was accelerated with Unsloth and Hugging Face's TRL library, making the model a practical option for applications that need efficient deployment of Qwen2.5-based models. It retains the base model's 32,768-token context length and targets general instruction-following tasks.


Overview

This model, gjyotin305/Qwen2.5-3B-Instruct_adaptive_tune_no_ref, is an instruction-tuned variant of the Qwen2.5-3B-Instruct base model, developed by gjyotin305. It has 3.1 billion parameters and supports a context length of 32,768 tokens, allowing it to handle long and complex prompts.
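Because the checkpoint is a standard Qwen2.5-style causal LM, it can be loaded with the Transformers library like any other instruction-tuned model. Below is a minimal inference sketch, assuming the repository ships the usual Qwen2.5 tokenizer and chat template; the prompt text is purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Qwen2.5-3B-Instruct_adaptive_tune_no_ref"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Build a chat-formatted prompt (assumes the standard Qwen2.5 chat template).
messages = [
    {"role": "user", "content": "Summarize the Qwen2.5 model family in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```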

Key Characteristics

  • Base Model: Finetuned from unsloth/Qwen2.5-3B-Instruct.
  • Efficient Training: Leverages Unsloth and Hugging Face's TRL library for significantly faster training (around a 2x speedup); a representative workflow is sketched after this list.
  • Parameter Count: A compact 3.1 billion parameters, balancing capability with computational cost.
  • Context Window: A 32,768-token context, suitable for detailed conversations and document processing.
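The model card does not publish the actual training recipe, so the following is only a representative sketch of the Unsloth + TRL fine-tuning workflow it references. The dataset, LoRA rank, sequence length, and hyperparameters are placeholders, and the trainer keyword arguments follow the style of the Unsloth example notebooks (newer TRL releases move some of them into an SFTConfig):

```python
# Representative sketch only: the dataset, adapters, and hyperparameters
# used for this checkpoint are not published.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 4096  # training context; the model itself supports up to 32k

# Unsloth's patched loader is where the advertised ~2x training speedup comes from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-3B-Instruct",
    max_seq_length=max_seq_length,
)

# Attach LoRA adapters; the rank and target modules here are common defaults,
# not values confirmed for this model.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset, rendered into chat-formatted text with the tokenizer's template.
def to_text(example):
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

LoRA fine-tuning through Unsloth keeps memory usage low enough to train a 3B model on a single consumer GPU, which is consistent with the card's emphasis on rapid iteration.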

Use Cases

This model is well suited to developers who want an efficiently trained Qwen2.5-based model. It is a reasonable choice for:

  • General instruction-following tasks.
  • Applications where rapid iteration and deployment of finetuned models are crucial.
  • Scenarios requiring a balance between model size and performance for text generation and understanding.