hauhuu/qwen_finetune_16bit
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

hauhuu/qwen_finetune_16bit is a 4-billion-parameter, Qwen3-based, instruction-tuned language model developed by hauhuu. It was fine-tuned with Unsloth and Hugging Face's TRL library, which enables faster training. The model is optimized for tasks typically handled by instruction-following models, leveraging its Qwen3 architecture for general language understanding and generation.
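As an instruction-tuned Qwen3 derivative, the model expects prompts in the ChatML-style chat format used by the Qwen family. The sketch below shows how that format is assembled from a list of chat messages; it is an illustration of the prompt layout, not official usage code for this model, and the helper name `to_chatml` is invented here. In practice you would load the tokenizer with Hugging Face `transformers` and let `tokenizer.apply_chat_template(...)` produce this string for you.

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render chat messages in the ChatML-style format used by Qwen models.

    Each message is wrapped in <|im_start|>role ... <|im_end|> markers; when
    add_generation_prompt is True, an open assistant turn is appended so the
    model continues from there. (Sketch only; prefer apply_chat_template.)
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)


prompt = to_chatml([{"role": "user", "content": "Explain BF16 in one sentence."}])
print(prompt)
```

With `transformers` installed, the equivalent would be `AutoTokenizer.from_pretrained("hauhuu/qwen_finetune_16bit").apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, followed by a normal `model.generate(...)` call within the 32k-token context window listed above.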
