quyenpro/Qwen-3B-Instruct-Vix-Exic

Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Apr 22, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The quyenpro/Qwen-3B-Instruct-Vix-Exic is a 3.1 billion parameter instruction-tuned causal language model, developed by quyenpro and finetuned from unsloth/Qwen2.5-3B-Instruct-bnb-4bit. Training was accelerated with Unsloth and Hugging Face's TRL library, and the model supports a 32,768-token context length. It is intended for general instruction-following tasks.


Model Overview

The quyenpro/Qwen-3B-Instruct-Vix-Exic is a 3.1 billion parameter instruction-tuned model, developed by quyenpro. It is finetuned from the unsloth/Qwen2.5-3B-Instruct-bnb-4bit base model and uses the Qwen2 transformer architecture.

Key Characteristics

  • Efficient Training: The model was trained roughly 2x faster using Unsloth together with Hugging Face's TRL library, reflecting an emphasis on training efficiency.
  • Base Model: Built on unsloth/Qwen2.5-3B-Instruct-bnb-4bit, so it inherits Qwen2.5's instruction-following capabilities.
  • Context Length: Supports a 32,768-token context window, allowing it to process long inputs and produce coherent, extended responses; a loading and inference sketch follows this list.
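
The sketch below shows one way to load the model and run an instruction-following chat turn with the standard Hugging Face transformers Auto* API. It assumes the repository id matches the model name above, that the BF16 weights fit on the available device, and that the repo ships the Qwen2.5 chat template; the prompt text is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quyenpro/Qwen-3B-Instruct-Vix-Exic"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
    device_map="auto",
)

# Qwen2.5-style chat turns: system + user
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of a 32k context window."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```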

Use Cases

This model is suitable for a variety of instruction-following applications where a balance between output quality and computational cost is desired. Its lightweight training pipeline also makes it a reasonable starting point for rapid deployment or for further finetuning on domain-specific datasets, as sketched below.
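
As a rough illustration of continued finetuning in the same Unsloth + TRL setup the card describes, the sketch below attaches LoRA adapters and runs supervised finetuning. The repo id, dataset file, and hyperparameters are assumptions for illustration, and argument names (e.g. tokenizer vs. processing_class, where dataset_text_field lives) vary between TRL releases.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the model through Unsloth's fast loader (assumed repo id).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="quyenpro/Qwen-3B-Instruct-Vix-Exic",
    max_seq_length=4096,   # any length up to the 32k context
    load_in_4bit=True,     # QLoRA-style 4-bit finetuning
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="my_instructions.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="qwen3b-vix-exic-ft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        dataset_text_field="text",
    ),
)
trainer.train()
```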