Nmoro/dpo-qwen-cot-merged

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Feb 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

Nmoro/dpo-qwen-cot-merged is a 4 billion parameter Qwen3-based causal language model fine-tuned using Direct Preference Optimization (DPO) via Unsloth. This model is optimized for improved reasoning, specifically Chain-of-Thought (CoT), and enhanced structured response quality. It leverages a 32768-token context length and is designed for tasks requiring coherent logical progression and well-structured outputs.


Model Overview

Nmoro/dpo-qwen-cot-merged is a 4 billion parameter language model built upon the Qwen3-4B-Instruct-2507 base model. It has been fine-tuned using Direct Preference Optimization (DPO), a method that aligns model responses with human preference data, improving overall quality and utility. The fine-tuning process used the Unsloth library and produced a fully merged 16-bit weight model, eliminating the need for adapter loading.

Key Capabilities

  • Enhanced Reasoning: Optimized specifically to improve Chain-of-Thought (CoT) reasoning, allowing for more logical and step-by-step problem-solving.
  • Structured Response Quality: Focuses on generating higher quality and more structured outputs based on preference datasets.
  • Direct Preference Optimization (DPO): Leverages DPO for alignment, which directly optimizes a policy against a preference dataset without requiring a separate reward model.
  • Efficient Deployment: Provided as a merged 16-bit model, simplifying deployment with standard transformers library usage.
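
Because the DPO weights are already merged, the model can be loaded with the standard `transformers` API alone. The sketch below is illustrative, not an official example from the model card; the generation settings and helper names are assumptions.

```python
# Minimal sketch of loading Nmoro/dpo-qwen-cot-merged with transformers.
# The DPO LoRA adapter is merged into the 16-bit weights, so no
# PEFT/adapter loading step is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Nmoro/dpo-qwen-cot-merged"


def build_messages(question: str) -> list:
    """Wrap a user question in the chat-message format used by Qwen3
    chat templates."""
    return [{"role": "user", "content": question}]


def generate(question: str, max_new_tokens: int = 512) -> str:
    """Load the merged model lazily and generate a CoT-style answer.
    max_new_tokens is an illustrative default, not a published value."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens before decoding the generated answer.
    return tokenizer.decode(
        output[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

A typical call would be `generate("Solve step by step: what is 17 * 24?")`, letting the chat template elicit the model's Chain-of-Thought style.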

Training Details

The model underwent 1 epoch of DPO training on the u-10bei/dpo-dataset-qwen-cot dataset, with a learning rate of 5e-07 and a beta value of 0.1. Training used a maximum sequence length of 1024 tokens and a LoRA configuration (r=8, alpha=16) that was subsequently merged into the base weights.
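
For reference, the reported hyperparameters can be collected in one place. The key names below loosely follow TRL's DPO training conventions as an assumption; the card does not publish the exact training script.

```python
# DPO hyperparameters as reported on the model card. Key names are an
# assumption (TRL-style), not taken from the actual training code.
dpo_hyperparameters = {
    "num_train_epochs": 1,
    "learning_rate": 5e-07,
    "beta": 0.1,        # DPO KL-regularization strength
    "max_length": 1024, # maximum sequence length during training
    "dataset": "u-10bei/dpo-dataset-qwen-cot",
}

# LoRA settings that were merged back into the base weights after training.
lora_settings = {
    "r": 8,          # LoRA rank
    "lora_alpha": 16,
}
```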