Hyeongwon/P2-split2_prob_Qwen3-14B-Base_0405
Task: Text Generation · Model Size: 14B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Architecture: Transformer · Published: Apr 5, 2026

Hyeongwon/P2-split2_prob_Qwen3-14B-Base_0405 is a 14-billion-parameter language model fine-tuned from Qwen/Qwen3-14B-Base by Hyeongwon, using the TRL framework for training. It is intended for general text generation tasks and retains the base model's 32,768-token context length.
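As a minimal usage sketch, the model can presumably be loaded with the standard Hugging Face Transformers API, like any other Qwen3-family causal LM. The helper name `generate_text` and the decoding settings below are illustrative assumptions, not part of the model card; the import is deferred inside the function so the sketch can be read without `transformers` installed.

```python
# Hypothetical sketch: text generation with this model via Hugging Face Transformers.
MODEL_ID = "Hyeongwon/P2-split2_prob_Qwen3-14B-Base_0405"

def generate_text(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` (greedy decoding, settings are illustrative)."""
    # Deferred import: transformers is a heavy optional dependency of this sketch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # let Transformers pick the checkpoint dtype
        device_map="auto",    # place layers on available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_text("Write a haiku about autumn."))
```

Because the base model supports a 32k context, prompts well beyond typical chat lengths should fit, subject to the serving configuration shown above.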
