Hyeongwon/PH_prob_Qwen3-8B_0304-01
Text Generation · Model Size: 8B · Quant: FP8 · Context Length: 32K · Concurrency Cost: 1 · Published: Mar 4, 2026 · Architecture: Transformer

Hyeongwon/PH_prob_Qwen3-8B_0304-01 is an 8-billion-parameter language model fine-tuned from ChuGyouk/Qwen3-8B-Base using supervised fine-tuning (SFT) with the TRL framework. It is intended for general text generation, building on the Qwen3 architecture and a 32K-token context window.
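The model card does not ship a usage snippet, so the following is a minimal sketch of loading the model with the Hugging Face `transformers` library, assuming the standard `AutoModelForCausalLM` / `AutoTokenizer` path works for this checkpoint; the prompt and the generation settings in `build_generation_kwargs` are illustrative choices, not values from the model card.

```python
# Sketch: load Hyeongwon/PH_prob_Qwen3-8B_0304-01 and generate text.
# Assumes the `transformers` and `torch` packages are installed; the
# generation parameters below are illustrative defaults, not official ones.

MODEL_ID = "Hyeongwon/PH_prob_Qwen3-8B_0304-01"


def build_generation_kwargs(max_new_tokens: int = 256) -> dict:
    """Return keyword arguments for model.generate(); values are examples."""
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.7,
    }


def main() -> None:
    # Import lazily so the module can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" spreads the 8B weights across available GPUs/CPU.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    inputs = tokenizer("Write a haiku about autumn.", return_tensors="pt")
    inputs = inputs.to(model.device)
    outputs = model.generate(**inputs, **build_generation_kwargs())
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Running the script downloads roughly 16 GB of FP8/BF16 weights on first use, so a GPU with sufficient memory (or `device_map="auto"` CPU offload) is assumed.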
