Model Overview
Taiko56/dpo-qwen-cot-merged is a 4-billion-parameter language model built on the Qwen/Qwen3-4B-Instruct-2507 base model. It has been fine-tuned with Direct Preference Optimization (DPO), a method that aligns model responses with preferred outputs to improve their quality and relevance.
Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, allowing for more structured and logical problem-solving.
- Improved Response Quality: DPO training focuses on aligning outputs with desired preferences, leading to higher quality and more relevant generations.
- Full-Merged Weights: This repository provides the full-merged 16-bit weights, eliminating the need for separate adapter loading and simplifying deployment.
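Because the weights are fully merged, the model can be loaded directly with Hugging Face transformers, with no separate PEFT adapter step. A minimal loading sketch (the bfloat16 dtype and device_map choices are illustrative assumptions, not settings stated by this card):

```python
MODEL_ID = "Taiko56/dpo-qwen-cot-merged"

def load_model(model_id: str = MODEL_ID):
    """Load the merged model and tokenizer directly -- no adapter loading needed.

    Imports are kept inside the function so the sketch can be read and tested
    without torch/transformers installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption: load the 16-bit weights in bf16
        device_map="auto",
    )
    return model, tokenizer
```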
Training Details
The model was trained for a single epoch of DPO with a learning rate of 1e-07 and a beta of 0.1, using a maximum sequence length of 1024 tokens. The LoRA adapter (r=8, alpha=16) was then merged into the base model, producing a standalone, ready-to-use checkpoint.
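For context, the beta=0.1 hyperparameter scales the log-probability margin inside the DPO loss. A pure-Python sketch of the per-pair loss (the log-ratio inputs are illustrative; a real trainer computes them from policy and reference model log-probabilities):

```python
import math

def dpo_loss(chosen_logratio: float, rejected_logratio: float, beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * (chosen - rejected log-ratio margin)).

    chosen_logratio   = log p_policy(chosen)   - log p_ref(chosen)
    rejected_logratio = log p_policy(rejected) - log p_ref(rejected)
    """
    margin = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

A larger margin between chosen and rejected responses drives the loss toward zero; a small beta such as 0.1 makes that pressure gentle, keeping the fine-tuned policy close to the reference model.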
Intended Use Cases
This model is particularly well-suited for applications where high-quality, aligned, and reasoning-focused outputs are critical. Its optimization for Chain-of-Thought makes it valuable for tasks requiring step-by-step logical deduction and structured responses.
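Since the model targets step-by-step reasoning, prompting it through the standard chat-message format and explicitly asking for intermediate steps is a natural fit. A small sketch of building such a prompt (the system instruction wording is an illustrative assumption, not a prompt recommended by the model authors):

```python
def build_cot_messages(question: str) -> list[dict]:
    """Build a chat-format message list that nudges the model to reason step by step."""
    return [
        {
            "role": "system",
            "content": "Reason through the problem step by step before giving a final answer.",
        },
        {"role": "user", "content": question},
    ]

messages = build_cot_messages("A train travels 120 km in 1.5 hours. What is its average speed?")
# Pass `messages` to tokenizer.apply_chat_template(...) and model.generate(...)
# as with any Qwen3 instruct model.
```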