yunjae-won/mpq3_qwen4bi_sft
Text generation · Concurrency cost: 1 · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Apr 6, 2026 · Architecture: Transformer

The yunjae-won/mpq3_qwen4bi_sft model is a 4 billion parameter instruction-tuned language model developed by yunjae-won, based on the Qwen architecture. It targets general language understanding and generation tasks and supports a context length of 32,768 tokens, making it suitable for long-document processing and conversational AI applications.
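Below is a minimal usage sketch, assuming the checkpoint is hosted on the Hugging Face Hub and is compatible with the standard transformers causal-LM and chat-template interfaces; the prompt text and generation settings are illustrative, not part of the model card.

```python
# Minimal sketch: load the model and run one instruction-style generation.
# Assumes the repo id below resolves on the Hugging Face Hub and that the
# tokenizer ships a chat template (typical for Qwen-based instruct models).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yunjae-won/mpq3_qwen4bi_sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Instruction-tuned models are usually queried through the chat template.
messages = [
    {"role": "user", "content": "Summarize the benefits of long-context language models."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```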
