chronobcelp/dpo-qwen-cot-merged
chronobcelp/dpo-qwen-cot-merged is a 4-billion-parameter causal language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO). The fine-tuning targets Chain-of-Thought (CoT) reasoning and structured response quality, making the model suited to applications that need aligned, coherent outputs on multi-step reasoning tasks.
Model Overview
chronobcelp/dpo-qwen-cot-merged is a 4-billion-parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It was fine-tuned with Direct Preference Optimization (DPO) using the Unsloth library, and the resulting adapters have been fully merged into the 16-bit base weights, so no separate adapter loading is required.
Key Capabilities
- Enhanced Reasoning: Optimized specifically to improve Chain-of-Thought (CoT) reasoning, making it suitable for tasks requiring multi-step logical deduction.
- Improved Structured Responses: The DPO fine-tuning process focused on aligning the model's outputs with preferred formats and quality, leading to more coherent and structured answers.
- Direct Use: As a fully merged model, it can be used directly with the `transformers` library, without any additional configuration for LoRA adapters.
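Because the weights are fully merged, inference follows the standard `transformers` loading pattern. The sketch below assumes `transformers` and `torch` are installed and that the model id resolves on the Hugging Face Hub; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chronobcelp/dpo-qwen-cot-merged"

# Merged 16-bit weights: no PEFT/LoRA adapter loading is needed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the merged 16-bit precision
    device_map="auto",
)

# An illustrative multi-step reasoning prompt.
messages = [
    {
        "role": "user",
        "content": "A train covers 60 km in 45 minutes. "
                   "What is its average speed in km/h? Think step by step.",
    }
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```

Setting `torch_dtype="auto"` lets `transformers` pick up the checkpoint's native 16-bit precision rather than upcasting to float32.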
Training Details
The model was trained for 2 epochs with a learning rate of 5e-6, a DPO beta of 0.05, and a maximum sequence length of 1536 tokens, using the u-10bei/dpo-dataset-qwen-cot preference dataset.
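For reference, the reported hyperparameters map directly onto the arguments of TRL's `DPOConfig` (the API that Unsloth wraps). The sketch below collects them in one place; it is not the exact training script, and it assumes the dataset follows TRL's prompt/chosen/rejected column format.

```python
# Hyperparameters reported on this card, collected for reuse.
dpo_hyperparams = {
    "learning_rate": 5e-6,
    "beta": 0.05,            # DPO KL-penalty strength
    "num_train_epochs": 2,
    "max_length": 1536,      # maximum sequence length in tokens
}

# Sketch of how they would be passed to TRL's DPOTrainer
# (assumes a loaded model/tokenizer and a prompt/chosen/rejected dataset):
#
# from datasets import load_dataset
# from trl import DPOConfig, DPOTrainer
#
# dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")
# config = DPOConfig(output_dir="dpo-qwen-cot", **dpo_hyperparams)
# trainer = DPOTrainer(model=model, args=config,
#                      train_dataset=dataset, processing_class=tokenizer)
# trainer.train()
```

A lower beta such as 0.05 keeps the policy closer to the reference model's behavior than the common default of 0.1, which is consistent with fine-tuning that preserves the base model's instruction-following while shifting output preferences.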
Good For
- Applications requiring strong reasoning and logical inference.
- Generating structured and high-quality text responses.
- Developers looking for a Qwen3-4B variant with improved alignment and CoT capabilities.