Model Overview
This model, Itohiro2929/dpo-qwen-cot-merged, is a 4-billion-parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It was fine-tuned with Direct Preference Optimization (DPO) via the Unsloth library to align its responses with preferred outputs. The LoRA adapter has been fully merged into 16-bit weights, so no separate adapter loading is required.
Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and coherent response generation.
- Improved Structured Responses: Fine-tuned to produce higher quality structured outputs based on preference data.
- Direct Preference Optimization (DPO): Leverages DPO to align model behavior with desired response characteristics.
- Ready-to-Use: Provided as a merged model, it can be used directly with the transformers library without additional configuration.
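Because the weights are already merged, loading follows the standard transformers pattern. The sketch below is a minimal, hypothetical example; the prompt text and generation settings are illustrative placeholders, not values from the model card.

```python
# Hypothetical usage sketch: loading the merged model directly with
# transformers -- no PEFT/adapter step is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Itohiro2929/dpo-qwen-cot-merged"

def generate(question: str, max_new_tokens: int = 256) -> str:
    """Load the model, apply its chat template, and return the reply."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": question}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

# Example call (downloads the full 16-bit weights on first use):
# print(generate("Explain step by step why 17 is prime."))
```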
Training Details
The model was trained for 1 epoch with a learning rate of 5e-07 and a beta value of 0.1. It used a maximum sequence length of 2048 and a LoRA configuration (r=8, alpha=16), which has since been merged into the base model. The training data was u-10bei/dpo-dataset-qwen-cot.
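The hyperparameters above can be sketched as a training script using Unsloth with TRL's DPOTrainer. This is a reconstruction under stated assumptions, not the authors' actual script: the dataset column names and any unlisted settings (batch size, scheduler, LoRA target modules) are assumptions filled with common defaults.

```python
# Hypothetical reconstruction of the DPO run described in the card.
# Assumes a CUDA GPU and that u-10bei/dpo-dataset-qwen-cot has the
# standard "prompt"/"chosen"/"rejected" columns expected by DPOTrainer.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,          # from the card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=8,                          # from the card
    lora_alpha=16,                # from the card
)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        learning_rate=5e-7,       # from the card
        beta=0.1,                 # from the card
        num_train_epochs=1,       # from the card
        output_dir="dpo-qwen-cot",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()

# Merge the LoRA adapter into 16-bit weights, matching the published model.
model.save_pretrained_merged(
    "dpo-qwen-cot-merged", tokenizer, save_method="merged_16bit"
)
```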
Good For
- Applications requiring improved logical reasoning and step-by-step explanations.
- Tasks where structured and high-quality responses are critical.
- Developers seeking a Qwen3-4B variant with enhanced alignment to preferred output styles.