junfengzhou/qwen3-14b-rl
Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Jan 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

junfengzhou/qwen3-14b-rl is a 14-billion-parameter language model fine-tuned by junfengzhou from the OpenPipe/Qwen3-14B-Instruct base model. It was trained with Unsloth and Hugging Face's TRL library, a setup reported to give a 2x speedup during fine-tuning. With the Qwen3 architecture and a 32,768-token context window, it is suited to long-context text processing and generation tasks.
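A minimal usage sketch, assuming the model is published as a standard Hugging Face repository loadable with the `transformers` library. The generation settings and the `clamp_to_context` helper are illustrative, not part of the model card; only the repo id and the 32k context length come from the card above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "junfengzhou/qwen3-14b-rl"  # repo id from this card
MAX_CTX = 32_768                       # 32k context length stated above


def clamp_to_context(input_ids, max_new_tokens, max_ctx=MAX_CTX):
    """Keep the most recent tokens so prompt + generation fits the window.

    Hypothetical helper for illustration: trims the prompt from the left
    when prompt length plus the generation budget would exceed max_ctx.
    """
    budget = max_ctx - max_new_tokens
    return input_ids[-budget:] if len(input_ids) > budget else input_ids


if __name__ == "__main__":
    # Downloads ~14B weights; requires sufficient GPU memory (FP8/auto dtype).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = "Explain reinforcement learning in one paragraph."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The `device_map="auto"` argument spreads the weights across available GPUs; for constrained hardware, a quantized loading path would be needed instead.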
