Theerath2005/qwen_finetune_16bit
Text generation · Model size: 14B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Feb 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Theerath2005/qwen_finetune_16bit is a 14-billion-parameter Qwen3 model, finetuned by Theerath2005 for enhanced performance. It was trained with Unsloth and Hugging Face's TRL library, which the author reports gave a 2x speedup during the finetuning process. The model targets general language tasks, building on the Qwen3 architecture and this optimized training setup.
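As a Qwen3-family chat model, it expects prompts in a ChatML-style format. The sketch below is a hand-rolled illustration of that format, not taken from this model's files; in practice you would load the tokenizer with `transformers` and call `tokenizer.apply_chat_template(...)`, which applies the model's actual template.

```python
# Illustrative only: Qwen-family models use a ChatML-style chat template.
# Prefer tokenizer.apply_chat_template(...) from transformers in real use,
# since it reads the template shipped with the model.

def build_chatml_prompt(messages):
    """Format a list of {role, content} dicts into a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to generate a reply
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen3 architecture."},
])
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open at the assistant turn, so generation continues as the model's answer.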