pkun2/qwen3_16bit_kr
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

pkun2/qwen3_16bit_kr is an 8-billion-parameter Qwen3-based causal language model developed by pkun2, fine-tuned from unsloth/qwen3-8b-unsloth-bnb-4bit. Training was accelerated roughly 2x using Unsloth together with Hugging Face's TRL library. The model is intended for general language generation tasks.
