y22ma/Qwen3-14B-finetune
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Context Length: 32k · Published: Jan 9, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

y22ma/Qwen3-14B-finetune is a 14 billion parameter causal language model developed by y22ma, fine-tuned from unsloth/Qwen3-14B. It was trained with Unsloth and Hugging Face's TRL library, which the author credits with roughly 2x faster fine-tuning. The model is intended for general-purpose text generation tasks.
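As a minimal sketch of how such a checkpoint is typically used for text generation, the snippet below loads it with the standard transformers causal-LM interface. It assumes the weights are published on the Hugging Face Hub under the id "y22ma/Qwen3-14B-finetune" and that the base model's chat template is included; neither is confirmed by this page.

```python
# Hypothetical usage sketch, not an official example from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "y22ma/Qwen3-14B-finetune"  # assumed Hub id, taken from the page title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick the checkpoint's dtype
    device_map="auto",    # spread the 14B weights across available devices
)

# Qwen3-family models ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Explain what a causal language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For serving at the listed 32k context length and FP8 quantization, a dedicated inference engine would normally be used instead of plain transformers; the sketch above only illustrates basic local usage.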
