kannav1331/qwen3-0.6b-sft-merged
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Feb 17, 2026 · Architecture: Transformer · Warm

kannav1331/qwen3-0.6b-sft-merged is a 0.8-billion-parameter language model based on the Qwen3 architecture that has undergone supervised fine-tuning (SFT); the "merged" suffix suggests the fine-tuned weights were merged back into the base model. It targets general language understanding and generation, and its compact size makes it well suited to efficient deployment. The fine-tuning indicates improved performance on conversational and instruction-following tasks.
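A minimal usage sketch, assuming the checkpoint is hosted as a standard Hugging Face Transformers causal-LM with a chat template (the card does not confirm this); the prompt and generation settings are illustrative only:

```python
# Hedged sketch: assumes kannav1331/qwen3-0.6b-sft-merged loads via the
# standard Transformers AutoModel/AutoTokenizer interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "kannav1331/qwen3-0.6b-sft-merged"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Briefly explain supervised fine-tuning."))
```

The generation call is kept behind a `__main__` guard so the sketch can be imported and inspected without downloading the weights.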
