hikahika/dpo-qwen-cot-merged

Text generation · Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Feb 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The hikahika/dpo-qwen-cot-merged model is a 4 billion parameter Qwen3-based instruction-tuned causal language model, fine-tuned using Direct Preference Optimization (DPO) via Unsloth. It is specifically optimized to improve reasoning capabilities through Chain-of-Thought (CoT) and enhance structured response quality. This model is designed for tasks requiring improved logical reasoning and coherent, well-structured outputs.


Model Overview

The hikahika/dpo-qwen-cot-merged model is a 4 billion parameter language model built upon the Qwen3-4B-Instruct-2507 base model. It has been fine-tuned using Direct Preference Optimization (DPO) with the Unsloth library, specifically targeting enhanced reasoning (Chain-of-Thought) and improved quality of structured responses.

Key Capabilities

  • Enhanced Reasoning: Optimized to produce more logical and coherent Chain-of-Thought reasoning.
  • Structured Responses: Improves the quality and structure of generated outputs through DPO alignment.
  • Direct Use: Provided as a fully merged 16-bit weight model; no adapter loading is required for direct use with transformers.
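Because the weights are fully merged, the model loads like any other causal LM in transformers. A minimal sketch (the repo id is taken from this card; the prompt and generation settings are illustrative, not recommendations):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hikahika/dpo-qwen-cot-merged"

# Merged 16-bit weights: no PEFT/adapter loading step is needed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

messages = [
    {"role": "user",
     "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters here are placeholders; tune for your task.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```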

Training Details

The model underwent 1 epoch of DPO training with a learning rate of 1e-7 and a beta value of 0.1. The maximum sequence length used during training was 1024 tokens. The LoRA adapter (r=8, alpha=16) was merged into the base model.
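For reference, the DPO objective used in this kind of training reduces to a per-pair loss of -log sigmoid(beta * margin), where the margin compares how much the policy prefers the chosen response over the rejected one relative to the reference model. A self-contained sketch with this card's beta = 0.1 (the log-probability values below are made-up illustrations, not training data):

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log sigmoid(beta * margin)."""
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# If the policy matches the reference exactly, the margin is 0 and the
# loss is log 2 (about 0.6931).
print(dpo_loss(0.0, 0.0, 0.0, 0.0))

# If the policy has moved toward the chosen response and away from the
# rejected one, the margin is positive and the loss drops below log 2.
print(dpo_loss(-10.0, -12.0, -12.0, -11.0))
```

A small beta (0.1 here) keeps the policy close to the reference model, which is why DPO fine-tuning like this tends to sharpen response style and reasoning structure without drifting far from the base model's behavior.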

Good For

  • Applications requiring improved logical reasoning.
  • Generating structured and high-quality text responses.
  • Tasks where Chain-of-Thought prompting is beneficial.
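When Chain-of-Thought prompting helps, an explicit step-by-step instruction in the user turn is usually enough. A sketch of building such a prompt by hand (the <|im_start|> markers follow the usual Qwen chat convention and are assumed here for illustration; in practice, the tokenizer's apply_chat_template is the authoritative source):

```python
def build_cot_prompt(question: str) -> str:
    # Qwen-style chat markup, shown for illustration only --
    # prefer tokenizer.apply_chat_template in real code.
    user_turn = f"{question}\n\nThink step by step, then state the final answer."
    return (
        "<|im_start|>user\n" + user_turn + "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_cot_prompt("If 3 pens cost $4.50, how much do 7 pens cost?")
print(prompt)
```

Ending the prompt with an open assistant turn lets the model begin its reasoning immediately, which is where DPO-tuned CoT models like this one are intended to shine.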

License

This model is distributed under the MIT License, consistent with the terms of its training dataset (u-10bei/dpo-dataset-qwen-cot). Users must also adhere to the original base model's license terms.