Ty-Yuki/dpo-qwen-cot-merged

Text Generation · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Ty-Yuki/dpo-qwen-cot-merged is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO). It is optimized to strengthen Chain-of-Thought (CoT) reasoning and to improve the quality of structured responses. Training was done with the Unsloth library, and the release ships fully merged 16-bit weights, making it suitable for direct use in applications that need improved logical coherence and structured outputs.


Model Overview

Ty-Yuki/dpo-qwen-cot-merged is a 4-billion-parameter language model built on the Qwen/Qwen3-4B-Instruct-2507 base model. It was fine-tuned with Direct Preference Optimization (DPO) via the Unsloth library, aligning its responses with preferred outputs from a preference dataset.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and coherent outputs.
  • Structured Response Quality: Specifically trained to generate higher quality structured responses based on a preference dataset.
  • Direct Use: Provided as fully merged 16-bit weights, so no adapter loading is required and the model integrates directly with transformers.
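Because the weights are merged, the model can be loaded like any standard causal LM; a minimal sketch using the transformers chat-template API (the prompt text is illustrative, and hardware settings such as `device_map` may need adjusting):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ty-Yuki/dpo-qwen-cot-merged"

# Fully merged BF16 weights load directly; no PEFT adapter step is needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Illustrative reasoning prompt to exercise the CoT fine-tuning
messages = [
    {"role": "user", "content": "Explain step by step why 17 is a prime number."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```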

Training Details

The model was trained for one epoch of DPO with a learning rate of 1e-7, a beta of 0.1, and a maximum sequence length of 1024. Training used a LoRA configuration (r=8, alpha=16) whose adapters were subsequently merged into the base model. The preference data was sourced from u-10bei/dpo-dataset-qwen-cot.
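The hyperparameters above map naturally onto a TRL-style DPO setup; the fragment below is a hedged sketch under that assumption (the actual Unsloth training script may differ, and `output_dir` and `target_modules` are illustrative, not from the card):

```python
from trl import DPOConfig
from peft import LoraConfig

# Values reported on the card: 1 epoch, lr 1e-7, beta 0.1, max seq len 1024
dpo_args = DPOConfig(
    output_dir="dpo-qwen-cot",  # illustrative path, not from the card
    num_train_epochs=1,
    learning_rate=1e-7,
    beta=0.1,
    max_length=1024,
)

# LoRA r=8, alpha=16, later merged into the base weights
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated
)
```

A `DPOTrainer` would then take these configs together with the base model and the u-10bei/dpo-dataset-qwen-cot preference pairs; after training, the adapters are merged so the published checkpoint needs no PEFT at inference time.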

Good For

This model is particularly well-suited for applications where improved reasoning, logical progression, and structured output formats are critical. Its DPO-based optimization makes it a strong candidate for tasks requiring high-quality, aligned responses.