zamber1991/Qwen2.5-1.5B-KTO-Finetuning
TEXT GENERATION · Open Weights · Warm
Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Mar 22, 2026 · License: apache-2.0 · Architecture: Transformer

The zamber1991/Qwen2.5-1.5B-KTO-Finetuning model is a 1.5-billion-parameter Qwen2.5-based language model published by zamber1991. It was finetuned with KTO (Kahneman-Tversky Optimization) using Unsloth and Hugging Face's TRL library, which speeds up training and reduces memory use. The result is a compact yet capable model suited to applications that need efficient text generation.
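A minimal usage sketch with the Hugging Face `transformers` library is shown below. This is an assumption about how the model would typically be loaded (the card itself does not include usage code); the prompt text and generation parameters are illustrative.

```python
# Hypothetical usage sketch for this model card; loading the weights
# requires network access to the model hub, so the pipeline is built lazily.
MODEL_ID = "zamber1991/Qwen2.5-1.5B-KTO-Finetuning"


def build_generator():
    """Lazily import transformers and return a text-generation pipeline.

    The BF16 dtype matches the quantization listed on the card.
    """
    from transformers import pipeline  # deferred: heavy import + download

    return pipeline("text-generation", model=MODEL_ID, torch_dtype="bfloat16")


# Example (run only where the model can be downloaded):
#   gen = build_generator()
#   out = gen("Write a haiku about finetuning.", max_new_tokens=64)
#   print(out[0]["generated_text"])
```

The lazy import keeps the snippet importable in environments without `transformers` installed; the first call to `build_generator()` triggers the model download.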
