koutch/short_paper_qwen_qwen3-instruct-4b_train_sft_all_train_no_think
Task: Text generation
Concurrency cost: 1
Model size: 4B
Quantization: BF16
Context length: 32k
Published: Jan 5, 2026
License: apache-2.0
Architecture: Transformer (open weights)
koutch/short_paper_qwen_qwen3-instruct-4b_train_sft_all_train_no_think is a 4-billion-parameter, instruction-tuned causal language model based on Qwen3, published by koutch. It was fine-tuned from unsloth/Qwen3-4B-Instruct-2507 using Unsloth together with Hugging Face's TRL library, a combination that substantially speeds up training. The model is intended for general instruction-following tasks.
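Since this is a standard instruction-tuned causal language model, it can presumably be loaded with the Hugging Face `transformers` library like any other Qwen3 checkpoint. The sketch below is a minimal, hypothetical usage example (not from this page): the model id is taken from the card, but the helper names (`build_messages`, `generate`) and generation settings are illustrative assumptions.

```python
# Hypothetical usage sketch for this checkpoint with Hugging Face transformers.
# Only the model id comes from the card; everything else is an assumption.
MODEL_ID = "koutch/short_paper_qwen_qwen3-instruct-4b_train_sft_all_train_no_think"


def build_messages(instruction: str) -> list:
    """Wrap a user instruction in the chat-message format used by
    instruction-tuned Qwen3 models (a system turn is optional)."""
    return [{"role": "user", "content": instruction}]


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model and run one chat-style generation.

    Imports are deferred so this module can be inspected without
    `transformers`/`torch` installed; downloading the 4B checkpoint
    happens on first call.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    # Render the messages with the model's own chat template,
    # appending the assistant prefix so generation starts cleanly.
    prompt = tokenizer.apply_chat_template(
        build_messages(instruction), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A call such as `generate("Summarize the transformer architecture in one sentence.")` would then return the model's reply as a plain string.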