Hyeongwon/P2-split5_prob_Qwen3-4B-Base_0312-01

Public · 4B parameters · BF16 · 32768-token context · Mar 13, 2026 · Hugging Face

Hyeongwon/P2-split5_prob_Qwen3-4B-Base_0312-01 is a 4-billion-parameter language model fine-tuned from Hyeongwon/Qwen3-4B-Base using Supervised Fine-Tuning (SFT) with the TRL framework. It targets general text generation, building on the capabilities of the Qwen3-4B-Base architecture, and supports a 32768-token context length, making it suitable for processing moderately long inputs and generating coherent responses.
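A minimal usage sketch, assuming the standard Hugging Face Transformers API. The model id and context length come from this card; the dtype, device placement, and sampling settings below are illustrative assumptions, not values the card prescribes.

```python
# Sketch of loading and running the model with Transformers (assumed API).

MODEL_ID = "Hyeongwon/P2-split5_prob_Qwen3-4B-Base_0312-01"
CONTEXT_LENGTH = 32768  # context window stated on the card


def generation_kwargs(max_new_tokens: int = 512) -> dict:
    """Build generate() kwargs, clamping output length to the context window."""
    return {
        "max_new_tokens": min(max_new_tokens, CONTEXT_LENGTH),
        "do_sample": True,      # illustrative sampling settings,
        "temperature": 0.7,     # not prescribed by the model card
    }


def run_demo() -> None:
    # Heavy imports are deferred here so the helper above stays
    # importable without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the BF16 weights on the card
        device_map="auto",
    )
    prompt = "Explain supervised fine-tuning in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **generation_kwargs())
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Calling `run_demo()` downloads roughly 8 GB of BF16 weights, so it is left as an explicit step rather than executed on import.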
