yusufcelebi/qwen3-4b-full-lora-step-180
Task: Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Jan 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
The yusufcelebi/qwen3-4b-full-lora-step-180 model is a 4-billion-parameter language model based on Qwen3, published by yusufcelebi. It was fine-tuned with Unsloth and Hugging Face's TRL library, a combination reported to train up to 2x faster. LoRA fine-tuning on the Qwen3 base keeps the adaptation lightweight while targeting specific downstream applications.
Model Overview
The yusufcelebi/qwen3-4b-full-lora-step-180 is a 4-billion-parameter language model based on the Qwen3 architecture. Developed by yusufcelebi, it was fine-tuned using the Unsloth library in conjunction with Hugging Face's TRL library, a methodology reported to deliver roughly 2x faster training.
Key Capabilities
- Efficient Fine-tuning: Leverages Unsloth for accelerated training, making it suitable for rapid iteration and deployment.
- Qwen3 Architecture: Built upon the robust Qwen3 base model, providing a strong foundation for various NLP tasks.
- LoRA Integration: Utilizes Low-Rank Adaptation (LoRA) for efficient parameter updates during fine-tuning, reducing computational overhead.
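The low-rank update at the heart of LoRA can be sketched numerically. The example below uses made-up shapes and a hypothetical rank; the model's actual adapter configuration (ranks, target modules, alpha) is not published on this card:

```python
import numpy as np

# Illustrative shapes only: a hypothetical 1024x1024 weight matrix
# and a rank-8 LoRA adapter. Real Qwen3 shapes and ranks will differ.
d, r, alpha = 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)).astype(np.float32)   # frozen base weight
A = rng.standard_normal((r, d)).astype(np.float32)   # trainable, r x d
B = np.zeros((d, r), dtype=np.float32)               # trainable, d x r (zero-init)

# Effective weight at inference: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# With B zero-initialized, the adapter starts as a no-op on the base model.
assert np.allclose(W_eff, W)

# Parameter savings: only A and B are trained, not the full d x d matrix.
lora_params = A.size + B.size   # 2 * d * r = 16384
full_params = W.size            # d * d     = 1048576
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

Because only the two small matrices are updated, the optimizer state and gradients shrink proportionally, which is the source of LoRA's reduced computational overhead.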
Good For
- Developers seeking a Qwen3-based model that has undergone efficient, accelerated fine-tuning.
- Applications where a 4 billion parameter model with LoRA adaptations can provide a balance of performance and resource efficiency.
- Experimentation with models trained using Unsloth's optimized training techniques.
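For reference, models published in this format can typically be loaded with the standard transformers API. The snippet below is a generic sketch, not an official usage example: the repo id is taken from this card, while the chat prompt and generation settings are illustrative assumptions, and BF16 hardware support is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this card; all other settings are illustrative assumptions.
repo_id = "yusufcelebi/qwen3-4b-full-lora-step-180"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # card lists BF16
    device_map="auto",
)

# Hypothetical prompt, formatted with the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```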