Hyeongwon/P2-split2_prob_Qwen3-4B-Base_0317-01
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 17, 2026 · Architecture: Transformer · Warm

Hyeongwon/P2-split2_prob_Qwen3-4B-Base_0317-01 is a 4-billion-parameter language model fine-tuned by Hyeongwon from Qwen3-4B-Base. It was trained with Supervised Fine-Tuning (SFT) using the TRL framework, building on the base model's capabilities, and is designed for text-generation tasks with a 32,768-token context length for processing longer inputs.
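The card does not ship a usage snippet, so here is a minimal inference sketch using the Hugging Face `transformers` library. The prompt and sampling parameters are illustrative assumptions, not values from the card, and a recent `transformers` release with Qwen3 support is assumed.

```python
# Minimal text-generation sketch; prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hyeongwon/P2-split2_prob_Qwen3-4B-Base_0317-01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quantization listed on the card
    device_map="auto",
)

prompt = "Explain supervised fine-tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters below are placeholders, not recommendations from the card.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```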
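For context on the training setup, the sketch below shows a skeletal SFT run with TRL's `SFTTrainer`. The dataset, hyperparameters, and output path are placeholders; the author's actual training configuration is not published on this card.

```python
# Skeletal TRL SFT setup; dataset and hyperparameters are placeholders,
# not the configuration actually used to produce this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset (the example set used in TRL's docs), standing in
# for whatever data the author trained on.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-4B-Base",  # the base model named on the card
    train_dataset=train_dataset,
    args=SFTConfig(
        output_dir="P2-split2_prob_Qwen3-4B-Base_0317-01",
        per_device_train_batch_size=1,  # placeholder hyperparameters
        num_train_epochs=1,
    ),
)
trainer.train()
```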
