dlstuharu/dpo-qwen-cot-merged

Text Generation · Open Weights

  • Model Size: 4B
  • Quant: BF16
  • Ctx Length: 32k
  • Concurrency Cost: 1
  • Published: Feb 3, 2026
  • License: apache-2.0
  • Architecture: Transformer

The dlstuharu/dpo-qwen-cot-merged model is a 4 billion parameter Qwen3-based causal language model, fine-tuned with Direct Preference Optimization (DPO) to enhance reasoning capability and structured response quality. It supports a 32,768-token context window and is specifically optimized to improve Chain-of-Thought (CoT) reasoning, making it suited to applications that require high-quality, aligned outputs on reasoning-intensive tasks.


Overview

This model, dlstuharu/dpo-qwen-cot-merged, is a 4 billion parameter language model built upon the Qwen3-4B-Instruct-2507 base. It has been fine-tuned using Direct Preference Optimization (DPO) via the Unsloth library, with its full 16-bit weights merged for direct use without adapter loading.

Key Capabilities

  • Enhanced Reasoning: Optimized specifically to improve Chain-of-Thought (CoT) reasoning through DPO training.
  • Structured Response Quality: Aligned to produce preferred outputs, focusing on better structured and higher-quality responses.
  • Efficient Deployment: Provided as a fully merged model, simplifying integration into existing transformers workflows.
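Because the DPO-tuned weights are merged, the checkpoint loads like any standard transformers model, with no PEFT/adapter step. A minimal sketch follows; the prompt and generation settings are illustrative, not taken from the model card:

```python
MODEL_ID = "dlstuharu/dpo-qwen-cot-merged"


def build_messages(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format consumed by
    the Qwen3 chat template."""
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 512) -> str:
    # Imports kept local so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="bfloat16",  # matches the published BF16 weights
        device_map="auto",
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens and decode only the newly generated reply.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

No adapter merge or `PeftModel` wrapper is needed at inference time, since the 16-bit DPO weights are already baked into the checkpoint.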

Training Details

The model underwent one epoch of DPO training with a learning rate of 1e-7, a beta of 0.1, and a maximum sequence length of 1024 tokens. The preference dataset used for alignment was u-10bei/dpo-dataset-qwen-cot.
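For reference, DPO minimizes the standard preference objective below (Rafailov et al.), in which beta (0.1 here) controls how strongly the policy is penalized for drifting from the frozen reference model:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right) \right]
```

Here y_w and y_l are the chosen and rejected responses from the preference dataset, and sigma is the logistic sigmoid; a small beta such as 0.1 keeps the tuned policy close to the Qwen3-4B-Instruct-2507 reference.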

Good For

  • Applications requiring improved logical reasoning and problem-solving.
  • Scenarios where structured and high-quality output generation is critical.
  • Developers seeking a Qwen3-based model with enhanced alignment for specific response characteristics.