Hyeongwon/P2-split2_prob_ascii_normalized_Qwen3-4B-Base_0330-01
Text Generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 30, 2026 · Architecture: Transformer · Status: Cold

Hyeongwon/P2-split2_prob_ascii_normalized_Qwen3-4B-Base_0330-01 is a 4-billion-parameter language model fine-tuned by Hyeongwon from Qwen3-4B-Base. It was trained with Supervised Fine-Tuning (SFT) using the TRL framework and is intended for text generation, building on the capabilities of its base model. The model supports a context length of 32,768 tokens.
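
Below is a minimal text-generation sketch using the Hugging Face `transformers` library, assuming the checkpoint is hosted on the Hub under the model ID above and that the BF16 weights listed in the metadata are used directly. The prompt is a placeholder.

```python
# Minimal usage sketch (assumption: checkpoint is available on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hyeongwon/P2-split2_prob_ascii_normalized_Qwen3-4B-Base_0330-01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",           # place layers on available GPU(s)/CPU
)

prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base-style fine-tune rather than a chat model, plain completion prompts like the one above are likely the appropriate input format; adjust `max_new_tokens` and sampling parameters to taste.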
