D-AT2025/dpo-qwen-cot-merged_120steps

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Mar 1, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

D-AT2025/dpo-qwen-cot-merged_120steps is a 4-billion-parameter, Qwen3-based causal language model fine-tuned by D-AT2025 with Direct Preference Optimization (DPO). It is optimized to strengthen Chain-of-Thought (CoT) reasoning and to improve the quality of structured responses, making it suitable for applications that require aligned, high-quality outputs in reasoning and structured text generation tasks.


Model Overview

This model, dpo-qwen-cot-merged_120steps, is a 4-billion-parameter language model developed by D-AT2025. It was fine-tuned from the Qwen/Qwen3-4B-Instruct-2507 base model with Direct Preference Optimization (DPO) via the Unsloth library. The goal of the fine-tuning was to align the model's responses with preferred outputs, with a particular focus on Chain-of-Thought reasoning and the overall quality of structured responses.
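
Because the weights are published fully merged in 16-bit, the model loads directly through the standard transformers API, with no PEFT adapter step. A minimal loading sketch, assuming transformers and hardware with bfloat16 support:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "D-AT2025/dpo-qwen-cot-merged_120steps"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published weights are BF16
    device_map="auto",           # place layers on available device(s)
)
```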

Key Capabilities

  • Enhanced Reasoning: Optimized for stronger Chain-of-Thought capabilities (see the generation sketch after this list).
  • Improved Response Quality: Aligned to produce higher-quality, structured outputs.
  • DPO Fine-tuning: Benefits from Direct Preference Optimization for preference alignment.
  • Merged Weights: Distributed as fully merged 16-bit weights, so no adapter loading is required (see the loading sketch above).
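
To exercise the CoT behavior, prompts can be formatted with the tokenizer's chat template, as with other Qwen3 instruct models. A usage sketch continuing from the loading example above; the prompt and sampling settings are illustrative, not values from the card:

```python
messages = [
    {
        "role": "user",
        "content": "A train travels 120 km in 1.5 hours. "
                   "What is its average speed? Think step by step.",
    }
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```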

Training Details

The model was trained for one epoch with a learning rate of 1e-7 and a DPO beta of 0.1. The maximum sequence length during training was 1024 tokens. The preference data used for DPO was the u-10bei/dpo-dataset-qwen-cot dataset.
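
The card names Unsloth as the training library; since Unsloth's DPO path builds on trl's DPOTrainer, a plain-trl sketch is the clearest way to show the stated hyperparameters. This is an approximation, not the author's script, and it assumes the dataset follows trl's standard prompt/chosen/rejected column format:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "Qwen/Qwen3-4B-Instruct-2507"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference pairs; trl's DPOTrainer expects "prompt", "chosen",
# and "rejected" columns (assumed here, not confirmed by the card).
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

args = DPOConfig(
    output_dir="dpo-qwen-cot",  # hypothetical output path
    beta=0.1,                   # DPO beta from the card
    learning_rate=1e-7,         # learning rate from the card
    num_train_epochs=1,         # trained for one epoch
    max_length=1024,            # max sequence length from the card
)

trainer = DPOTrainer(
    model=model,                # trl creates the implicit reference model
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer, # recent trl; older versions use tokenizer=
)
trainer.train()
```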

Good For

  • Applications requiring improved reasoning abilities.
  • Tasks demanding high-quality, structured text generation.
  • Scenarios where preference alignment is crucial for model output.