Hyeongwon/PS_only_answer_Qwen3-4B-Base_0328-01-2e-5
Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Mar 28, 2026

Hyeongwon/PS_only_answer_Qwen3-4B-Base_0328-01-2e-5 is a 4-billion-parameter language model fine-tuned by Hyeongwon from Qwen3-4B-Base. It was trained with Supervised Fine-Tuning (SFT) using the TRL framework. As the name suggests, it is tuned to produce only the final answer, and it supports a context length of 32,768 tokens.
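The card does not include a usage snippet, so here is a minimal sketch of how such a model would typically be loaded with the Hugging Face Transformers library. The repository id is taken from the card; the function name `generate_answer` and the prompt are illustrative assumptions, not part of the original model card.

```python
# Hypothetical usage sketch for the model described above.
# Assumes the `transformers` and `torch` packages are installed;
# the heavy import is kept inside the function so this file can be
# inspected without downloading anything.

MODEL_ID = "Hyeongwon/PS_only_answer_Qwen3-4B-Base_0328-01-2e-5"


def generate_answer(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion from the fine-tuned model.

    Note: calling this downloads the ~4B-parameter checkpoint (BF16 weights).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_answer("What is 2 + 2?"))
```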
