Model Overview
hiro7ka/dpo-qwen-cot-merged-ver3 is a 4-billion-parameter language model built on the Qwen3-4B-Instruct-2507 base model. It was fine-tuned with Direct Preference Optimization (DPO) using the Unsloth library. The released weights are fully merged at 16-bit precision, so no adapter loading is required.
Key Capabilities & Optimization
This model's primary optimization objective was to align its responses with preferred outputs, focusing on:
- Improved Reasoning (Chain-of-Thought): The DPO training specifically targeted the model's ability to generate logical, step-by-step reasoning.
- Structured Response Quality: It is designed to produce higher quality and more coherent structured outputs based on preference datasets.
Training Details
The DPO training ran for 0.5 epochs with a learning rate of 1e-7, a beta of 0.15, and a maximum sequence length of 1024 tokens. The LoRA adapters (r=8, alpha=16) were merged directly into the base model weights.
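The setup above can be sketched with trl's DPOTrainer and Unsloth. This is a hypothetical reconstruction, not the author's actual training script: the hyperparameter values come from this card, but the dataset split, output directory, and everything else follow standard trl/Unsloth defaults and are assumptions.

```python
# Hypothetical sketch of the DPO setup described above. Only the values in
# TRAINING_CONFIG are taken from the model card; all other choices (split
# name, output dir, etc.) are assumptions following standard trl usage.

TRAINING_CONFIG = {
    "num_train_epochs": 0.5,   # from the card
    "learning_rate": 1e-7,     # from the card
    "beta": 0.15,              # from the card
    "max_length": 1024,        # from the card
    "lora_r": 8,               # from the card
    "lora_alpha": 16,          # from the card
}


def build_trainer():
    """Assemble the DPO trainer; requires unsloth, trl, and datasets installed."""
    from unsloth import FastLanguageModel
    from trl import DPOConfig, DPOTrainer
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        "Qwen/Qwen3-4B-Instruct-2507",
        max_seq_length=TRAINING_CONFIG["max_length"],
    )
    # Attach the LoRA adapters that are later merged into the base weights.
    model = FastLanguageModel.get_peft_model(
        model,
        r=TRAINING_CONFIG["lora_r"],
        lora_alpha=TRAINING_CONFIG["lora_alpha"],
    )
    args = DPOConfig(
        num_train_epochs=TRAINING_CONFIG["num_train_epochs"],
        learning_rate=TRAINING_CONFIG["learning_rate"],
        beta=TRAINING_CONFIG["beta"],
        max_length=TRAINING_CONFIG["max_length"],
        output_dir="dpo-qwen-cot",  # assumed name
    )
    return DPOTrainer(
        model=model,
        args=args,
        train_dataset=load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train"),
        processing_class=tokenizer,
    )


if __name__ == "__main__":
    build_trainer().train()
```

After training, the adapters would be merged into the base model (e.g. via `merge_and_unload`) to produce the 16-bit weights distributed here.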
Usage & Licensing
Because the weights are merged, the model can be used directly with the transformers library. Training used the u-10bei/dpo-dataset-qwen-cot dataset. The model is released under the MIT License; users must also comply with the base model's original license terms.
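A minimal loading sketch follows. Since the DPO weights are already merged, standard transformers loading works with no PEFT or adapter step; the dtype, device placement, and prompt are illustrative choices, not requirements from the card.

```python
# Minimal usage sketch: the merged model loads like any standard causal LM.
# dtype/device settings and the example prompt are assumptions, not
# requirements stated by the model card.

MODEL_ID = "hiro7ka/dpo-qwen-cot-merged-ver3"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the merged model and run one chat-style generation."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # assumed; fp16 also fits the 16-bit weights
        device_map="auto",
    )
    # Use the chat template inherited from the Qwen3 instruct base model.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Solve step by step: what is 17 * 24?"))
```

No `PeftModel.from_pretrained` call is needed; that is the practical benefit of shipping merged rather than adapter-only weights.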