Losa10/Qwen3-0.6b-test-kimi
Text Generation · Concurrency Cost: 1 · Model Size: 0.8B · Quant: BF16 · Ctx Length: 32k · Published: Mar 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

Losa10/Qwen3-0.6b-test-kimi is a 0.8 billion parameter Qwen3-based causal language model developed by Losa10. The model was fine-tuned from unsloth/qwen3-0.6b-unsloth-bnb-4bit, with training accelerated using Unsloth and Hugging Face's TRL library. It supports a 32,768-token context length, making it suitable for applications that need to process longer sequences efficiently. Its main differentiator is this optimized training pipeline, which enables quicker fine-tuning iteration and deployment.
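Since this is a standard Qwen3-based causal language model, it can presumably be loaded with the Hugging Face `transformers` library like any other checkpoint. The sketch below is a minimal, hedged example: the model ID comes from this card, the `generate` helper and its prompt are illustrative, and the BF16 dtype and 32k context limit mirror the metadata above.

```python
# Hypothetical usage sketch for Losa10/Qwen3-0.6b-test-kimi.
# The helper function and prompt are illustrative, not part of the model card.

MODEL_ID = "Losa10/Qwen3-0.6b-test-kimi"
MAX_CONTEXT = 32768  # 32k context length stated on the card


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model lazily and return a text completion."""
    # Imports are deferred so merely defining this function has no cost.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed in the card metadata.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    inputs = tokenizer(prompt, return_tensors="pt")
    # Guard against overflowing the 32k context window.
    if inputs["input_ids"].shape[1] + max_new_tokens > MAX_CONTEXT:
        raise ValueError("prompt plus generation budget exceeds the context window")

    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate("...")` would download the weights on first use; the function is deliberately lazy so the module can be imported without network access or a GPU.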