hariharanv04/qwen3-4b-instruct-75k-int
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 17, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm
hariharanv04/qwen3-4b-instruct-75k-int is a 4-billion-parameter instruction-tuned Qwen3 model published by hariharanv04. It was fine-tuned with Unsloth and Hugging Face's TRL library, which accelerated training, and is optimized for instruction-following tasks.
Model Overview
hariharanv04/qwen3-4b-instruct-75k-int is a 4-billion-parameter instruction-tuned model based on the Qwen3 architecture. Developed by hariharanv04, it was fine-tuned from unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit using Unsloth and Hugging Face's TRL library, a combination the author reports made training about 2x faster.
Key Capabilities
- Instruction Following: Designed and fine-tuned to excel at understanding and executing instructions.
- Efficient Training: Benefits from the Unsloth library, which optimizes the training process for speed.
- Qwen3 Architecture: Builds on the Qwen3 base model, including its 32k-token context window.
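For instruction-following use, prompts are normally rendered through the tokenizer's chat template (`tokenizer.apply_chat_template`). The sketch below mirrors the ChatML-style layout the Qwen model family uses, purely for illustration; the special-token names are an assumption based on the standard Qwen template, and real code should rely on the tokenizer rather than hand-built strings.

```python
# Illustrative sketch of the ChatML-style chat format used by Qwen-family
# models. Assumes the standard <|im_start|>/<|im_end|> markers; in practice,
# prefer tokenizer.apply_chat_template from the model's own tokenizer.

def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Cue the model to respond as the assistant.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize Unsloth in one sentence."},
])
```

The trailing `<|im_start|>assistant\n` is what prompts the model to generate its reply rather than continue the user turn.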
Good For
- Applications requiring a compact yet capable instruction-tuned model.
- Scenarios where efficient deployment and inference of a 4B parameter model are crucial.
- Tasks that benefit from a model trained with accelerated methods like Unsloth.