damianGil/Qwen2.5-1.5B-KTO-Finetuning

  • Task: Text Generation
  • Concurrency Cost: 1
  • Model Size: 1.5B
  • Quant: BF16
  • Context Length: 32k
  • Published: Apr 8, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The damianGil/Qwen2.5-1.5B-KTO-Finetuning model is a 1.5-billion-parameter Qwen2.5-based language model, fine-tuned by damianGil with KTO (Kahneman-Tversky Optimization). Training used Unsloth together with Hugging Face's TRL library, enabling roughly 2x faster fine-tuning. The model targets instruction-following tasks, combining the Qwen2.5 architecture with this efficient fine-tuning process.


Model Overview

The damianGil/Qwen2.5-1.5B-KTO-Finetuning model is a 1.5-billion-parameter language model based on the Qwen2.5 architecture. It was developed by damianGil and fine-tuned from the unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit base model.
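
A minimal loading sketch using Hugging Face transformers, assuming the checkpoint is published as a standard transformers repository; the dtype and device settings below are illustrative choices, not taken from the card.

```python
# Minimal inference-loading sketch with Hugging Face transformers.
# Assumes a standard transformers repo layout for this checkpoint;
# dtype and device_map are illustrative, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "damianGil/Qwen2.5-1.5B-KTO-Finetuning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)
```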

Key Characteristics

  • Architecture: Qwen2.5-based causal (decoder-only Transformer) language model.
  • Parameter Count: 1.5 billion parameters, balancing capability against computational cost.
  • Training Efficiency: fine-tuned using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training (a hedged training sketch follows this list).
  • Context Length: supports 32,768 tokens, allowing long inputs to be processed in a single pass.
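
The card does not include the training script, but the KTO-named checkpoint plus the TRL mention suggests TRL's KTOTrainer. The following is a minimal sketch under that assumption: the base checkpoint, dataset, and hyperparameters are illustrative placeholders (the author reportedly started from an Unsloth 4-bit base with Unsloth's loader for the speedup; this sketch uses a plain instruct checkpoint for simplicity).

```python
# Sketch of KTO fine-tuning with TRL's KTOTrainer. This is NOT the author's
# actual script; dataset, base model, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base_id = "Qwen/Qwen2.5-1.5B-Instruct"  # stand-in for the Unsloth 4-bit base
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# KTO trains on unpaired feedback: each row holds a prompt, a completion,
# and a boolean label marking the completion as desirable or undesirable.
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # example dataset

args = KTOConfig(
    output_dir="qwen2.5-1.5b-kto",
    beta=0.1,                 # strength of the KL penalty toward the reference
    desirable_weight=1.0,     # relative weight of desirable examples
    undesirable_weight=1.0,   # relative weight of undesirable examples
    per_device_train_batch_size=4,
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # tokenizer= in older TRL versions
)
trainer.train()
```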

Intended Use Cases

This model is suited to instruction-following tasks such as question answering, summarization, and multi-step directions, benefiting from the robust Qwen2.5 base and its efficient KTO fine-tuning. Its relatively compact 1.5B size makes it a good candidate for applications that need a capable model under tight compute or memory budgets.
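
A hedged instruction-following example, continuing from the loading sketch in the Model Overview section; the prompt and generation settings are illustrative.

```python
# Instruction-following inference via the Qwen2.5 chat template.
# Reuses `model` and `tokenizer` from the loading sketch above.
messages = [
    {"role": "user",
     "content": "Summarize the benefits of KTO fine-tuning in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```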