Koalacrown/qwen3-14b-cold-start-merged-16bit
Text generation | Concurrency cost: 1 | Model size: 14B | Quant: FP8 | Context length: 32k | Published: Mar 11, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights | Cold

Koalacrown/qwen3-14b-cold-start-merged-16bit is a 14-billion-parameter causal language model based on Qwen3, developed by Koalacrown. It was fine-tuned using Unsloth together with Hugging Face's TRL library, a combination Unsloth reports can speed up training by about 2x. The model is intended for general text generation tasks and supports a 32,768-token context length for long-context understanding and response generation.
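As a causal language model published with open weights, it should be loadable with the standard Hugging Face `transformers` API. The sketch below is an assumption based on the model card, not an official usage example from the author; the prompt string and the `generate` helper are illustrative, and running it requires downloading the ~14B-parameter weights and having sufficient GPU (or offload) memory.

```python
# Minimal sketch: text generation with this model via Hugging Face transformers.
# Assumes the weights are hosted under this repo id on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Koalacrown/qwen3-14b-cold-start-merged-16bit"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Download the model (on first call) and return a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keep the checkpoint's native precision
        device_map="auto",    # place layers on available GPU(s), offload the rest
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Usage would then be a single call, e.g. `generate("Summarize the Qwen3 architecture.")`. Since the context window is 32,768 tokens, prompts near that length leave correspondingly little room for `max_new_tokens`.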
