111iillil11iil/qwen25_1_5b_korean_unsloth
Text Generation · Open Weights
Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Context Length: 32k · Published: Apr 5, 2026 · License: apache-2.0 · Architecture: Transformer
The 111iillil11iil/qwen25_1_5b_korean_unsloth model is a 1.5-billion-parameter causal language model based on Qwen2.5, fine-tuned by 111iillil11iil. It was trained with Unsloth and Hugging Face's TRL library, which enables up to 2x faster training. Its small parameter count and accelerated training methodology make it well suited to efficient deployment.
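The card does not include usage instructions, so the following is a minimal sketch of loading the model for text generation with Hugging Face transformers. The model id comes from this card; the generation settings, the `generate` helper, and the Korean prompt are illustrative assumptions, not values published by the author.

```python
# Minimal sketch (assumptions noted in comments): load the model with
# Hugging Face transformers and generate a completion.
MODEL_ID = "111iillil11iil/qwen25_1_5b_korean_unsloth"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imports are deferred so the module can be imported without the
    # heavy torch/transformers dependencies installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # the card lists BF16 weights
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the new completion.
    completion = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)

if __name__ == "__main__":
    # Hypothetical Korean prompt, chosen because the model name suggests
    # a Korean-language fine-tune.
    print(generate("안녕하세요! 자기소개를 해 주세요."))
```

Note that the 32k context length applies to the full sequence (prompt plus completion), so long prompts leave proportionally less room for generated tokens.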