Shiyu-Lab/HarnessLLM_SFT_Qwen3_4B
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Nov 2, 2025 · License: MIT · Architecture: Transformer · Open weights

HarnessLLM_SFT_Qwen3_4B is a 4-billion-parameter language model from Shiyu-Lab, built on the Qwen3 architecture. It is instruction-tuned for general-purpose language understanding and generation. With a context length of 32,768 tokens, it is suited to applications that must process long textual inputs.
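A minimal usage sketch, assuming the weights are hosted on the Hugging Face Hub under the repo id shown above and that the model ships the standard Qwen3 chat template; loading in BF16 matches the quantization listed in the header. This is an illustration, not an official example from Shiyu-Lab.

```python
# Hypothetical usage sketch: load HarnessLLM_SFT_Qwen3_4B in BF16 via
# Hugging Face transformers. The repo id is assumed from the model name above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Shiyu-Lab/HarnessLLM_SFT_Qwen3_4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 weights listed above
    device_map="auto",
)

# Qwen3-based instruction models expose a chat template; use it to build prompts.
messages = [
    {"role": "user", "content": "Summarize the benefits of long context windows."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that generation requires downloading the ~8 GB of BF16 weights; `device_map="auto"` places layers on available GPUs and falls back to CPU.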
