OkamotoJP/dpo-qwen-cot-merged

Text generation · 4B parameters · BF16 · 32k context · Published: Feb 5, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

OkamotoJP/dpo-qwen-cot-merged is a 4-billion-parameter Qwen3-based causal language model fine-tuned with Direct Preference Optimization (DPO) via the Unsloth library. The fine-tuning targets stronger Chain-of-Thought (CoT) reasoning and more structured responses, aligning outputs with a preference dataset. The weights are distributed as a fully merged 16-bit model, so it can be used directly without adapters.


Model Overview

This model, OkamotoJP/dpo-qwen-cot-merged, is a 4-billion-parameter language model based on Unsloth/Qwen3-4B-Instruct-2507. It was fine-tuned with Direct Preference Optimization (DPO) using the Unsloth library to improve response quality and alignment.

Key Capabilities

  • Improved Reasoning: Optimized to generate better Chain-of-Thought (CoT) reasoning, leading to more logical and structured outputs.
  • Preference Alignment: Trained with DPO to align its responses with preferred examples, resulting in higher quality and more desirable outputs.
  • Direct Use: Provided as a fully merged 16-bit weight model, eliminating the need for adapter loading and simplifying deployment with transformers.
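Because the weights are already merged, the model loads like any other causal LM. A minimal usage sketch (the model id comes from this card; the prompt and generation parameters are illustrative assumptions, and the chat template is assumed to be the one shipped with the tokenizer):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OkamotoJP/dpo-qwen-cot-merged"

# Merged 16-bit weights: no PEFT/adapter loading step is required.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

# Hypothetical prompt; any instruction works.
messages = [{"role": "user", "content": "Explain step by step why 17 is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The tokenizer's built-in chat template handles the Qwen3 message formatting, so no manual special-token handling is needed.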

Training Details

The model underwent 1 epoch of DPO training with a learning rate of 1e-07 and a beta value of 0.1, using a maximum sequence length of 1024. Training used a LoRA configuration (r=8, alpha=16) whose adapters were subsequently merged into the base model. The training data was u-10bei/dpo-dataset-qwen-cot.
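For context on what the beta value controls, the per-pair DPO objective can be sketched in plain Python. The beta=0.1 matches this card; the log-probabilities in the example call are illustrative placeholders, not values from the actual training run:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single (chosen, rejected) preference pair.

    Each argument is the summed token log-probability of that response
    under the trainable policy or the frozen reference model. beta scales
    how strongly the policy is pushed away from the reference.
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(margin)), written as a numerically stable softplus(-margin)
    return max(-margin, 0.0) + math.log1p(math.exp(-abs(margin)))

# Illustrative log-probs: the policy favors the chosen response more than
# the reference does, so the loss falls below log(2) ~= 0.693.
print(dpo_loss(-12.0, -20.0, -13.0, -19.5, beta=0.1))
```

When the policy and reference agree exactly, the margin is zero and the loss sits at log(2); DPO training lowers it by widening the chosen-vs-rejected margin.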

Licensing

This model is released under the MIT License, consistent with its training dataset. Users must also adhere to the original base model's license terms.