yamanara/dpo-qwen-cot-merged

Text Generation · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The yamanara/dpo-qwen-cot-merged model is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO). It is optimized to improve Chain-of-Thought (CoT) reasoning and structured response quality, making it suited to applications that require enhanced logical processing and coherent, well-structured outputs.


Overview

This model, yamanara/dpo-qwen-cot-merged, is a 4-billion-parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It was fine-tuned using Direct Preference Optimization (DPO) via the Unsloth library, and its 16-bit weights have been fully merged, so no adapter loading is required.

Key Capabilities

  • Enhanced Reasoning: Optimized specifically to improve Chain-of-Thought (CoT) reasoning, making it suitable for tasks requiring logical deduction and multi-step problem-solving.
  • Structured Response Quality: The DPO training focused on aligning responses with preferred outputs, leading to more coherent and well-structured generations.
  • Direct Usage: As a full-merged model, it can be used directly with the transformers library without additional configuration for LoRA adapters.
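Because the weights are fully merged, the model loads like any standard causal LM in `transformers`. A minimal inference sketch (the prompt is illustrative, and generation settings are assumptions, not values from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yamanara/dpo-qwen-cot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # card lists BF16 weights
    device_map="auto",
)

# Standard chat-template usage; no LoRA adapter setup is needed
messages = [
    {"role": "user", "content": "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running this downloads roughly 8 GB of BF16 weights, so a GPU with sufficient memory (or `device_map="auto"` offloading) is advisable.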

Training Details

The model underwent a single epoch of DPO training with a learning rate of 1e-07, a beta of 0.1, and a maximum sequence length of 1024. The training dataset was u-10bei/dpo-dataset-qwen-cot.
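The run described above can be sketched with Unsloth and TRL's `DPOTrainer`. The hyperparameters (learning rate, beta, epochs, sequence length) come from the card; everything else (LoRA rank, batch size, output paths) is an assumption for illustration:

```python
# Sketch of the described DPO fine-tune; only lr, beta, epochs, and max
# sequence length are from the model card -- other settings are assumptions.
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,   # max sequence length from the card
)
model = FastLanguageModel.get_peft_model(model, r=16)  # LoRA rank is an assumption

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,            # DPO beta from the card
        learning_rate=1e-7,  # learning rate from the card
        num_train_epochs=1,  # single epoch
        max_length=1024,
        output_dir="dpo-qwen-cot",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()

# Merge the LoRA weights into the 16-bit base to produce the fully merged model
model.save_pretrained_merged("dpo-qwen-cot-merged", tokenizer, save_method="merged_16bit")
```

The final `save_pretrained_merged` call is what produces a standalone checkpoint that loads without adapters, matching the "merged 16-bit weights" described above.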

Good For

  • Applications requiring improved logical reasoning and problem-solving.
  • Generating structured and high-quality text responses.
  • Developers seeking a readily deployable, DPO-optimized Qwen3-4B variant for reasoning tasks.