qqo/dpo-qwen-cot-merged

Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Feb 17, 2026 · License: apache-2.0 · Architecture: Transformer

qqo/dpo-qwen-cot-merged is a fine-tuned Qwen1.5-Instruct model, optimized with Direct Preference Optimization (DPO) via the Unsloth library. The fine-tune targets stronger Chain-of-Thought (CoT) reasoning and higher-quality structured responses. It ships as fully merged 16-bit weights, making it suitable for direct use in applications requiring improved logical coherence and structured output.


Overview

This model, qqo/dpo-qwen-cot-merged, is a specialized fine-tune of the Qwen/qwen1.5-Instruct base model. It leverages Direct Preference Optimization (DPO), implemented with the Unsloth library, to refine its response generation.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and coherent outputs.
  • Structured Output Quality: Specifically trained to produce higher quality structured responses, aligning with preferred output formats.
  • Direct Use: Provided as a fully merged 16-bit model, eliminating the need for adapter loading and simplifying deployment with transformers (see the loading sketch below).

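Because the DPO-tuned weights are already merged, the model loads like any other transformers checkpoint. Below is a minimal sketch, assuming BF16-capable hardware; the prompt and generation settings are illustrative, not recommendations from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "qqo/dpo-qwen-cot-merged"

# Merged 16-bit weights: no PEFT/adapter loading step is needed.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

# Qwen-style chat formatting via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Think step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
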
Training Details

The model underwent DPO training for 1 epoch with a learning rate of 1e-07 and a beta value of 0.1. It used a maximum sequence length of 1024 and a LoRA configuration of r=8, alpha=16, which has since been merged into the base model. The DPO training data was sourced from the u-10bei/dpo-dataset-qwen-cot dataset.
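The card does not include the training script, so the following is only a hedged reconstruction: a minimal sketch of how the stated hyperparameters (1 epoch, lr 1e-07, beta 0.1, max length 1024, LoRA r=8/alpha=16) would typically be wired together with Unsloth and TRL's DPOTrainer. The base checkpoint ID, target modules, and everything else not named above are assumptions, not the author's actual code.

```python
# Hypothetical reconstruction -- not the author's script. Only the
# hyperparameters named in the card are taken from the source.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

BASE_MODEL = "Qwen/qwen1.5-Instruct"  # as written in the card; exact checkpoint unspecified

model, tokenizer = FastLanguageModel.from_pretrained(
    BASE_MODEL,
    max_seq_length=1024,   # stated maximum sequence length
    load_in_4bit=False,
)

# LoRA adapters with the card's stated rank/alpha; target modules are assumed.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,              # DPO temperature from the card
        learning_rate=1e-7,
        num_train_epochs=1,
        max_length=1024,
        output_dir="dpo-qwen-cot",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()

# Merge the LoRA weights into the base and save full 16-bit weights,
# producing a directly loadable checkpoint as described above.
model.save_pretrained_merged("dpo-qwen-cot-merged", tokenizer, save_method="merged_16bit")
```

Note that a learning rate of 1e-07 over a single epoch is a light-touch preference pass: it nudges the model toward the preferred CoT and formatting style while leaving the base model's instruction-following largely intact.
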

Good For

  • Applications requiring improved logical reasoning and step-by-step thought processes.
  • Scenarios where structured and high-quality output formats are critical.
  • Developers looking for a readily deployable Qwen1.5-Instruct variant with enhanced DPO-driven performance in reasoning and structured generation.