mutsumutsu/dpo-qwen-cot-merged
The mutsumutsu/dpo-qwen-cot-merged model is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO) with the Unsloth library. It is optimized to strengthen Chain-of-Thought (CoT) reasoning and to improve the quality of structured responses. The model ships as fully merged 16-bit weights, making it suitable for applications that require improved logical coherence and structured output.
Model Overview
This model, mutsumutsu/dpo-qwen-cot-merged, is a 4 billion parameter language model derived from the Qwen/Qwen3-4B-Instruct-2507 base model. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, with its 16-bit weights fully merged for direct use without adapter loading.
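Because the weights are fully merged, the checkpoint can be loaded with the standard transformers API like any causal LM, with no PEFT or adapter step. A minimal sketch; the bfloat16 dtype and the sample prompt are illustrative choices, not part of the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mutsumutsu/dpo-qwen-cot-merged"

# Merged 16-bit weights load directly; no adapter loading is required.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; float16 also works on most GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain step by step: what is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```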
Key Capabilities & Optimization
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning abilities.
- Structured Response Quality: Fine-tuned to produce higher quality and more structured outputs.
- DPO Training: Utilizes DPO with a specific preference dataset (u-10bei/dpo-dataset-qwen-cot) to align responses with preferred examples.
- Training Configuration: Trained for 1 epoch with a learning rate of 1e-07 and a beta value of 0.1, using a maximum sequence length of 1024 (see the training sketch below).
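The hyperparameters above map onto a standard Unsloth + TRL DPO run. The following is a minimal sketch of how such a run could look, not the author's exact script: only the dataset ID and the hyperparameters (1 epoch, learning rate 1e-07, beta 0.1, max sequence length 1024) come from the model card, while the split name, LoRA settings, and output paths are assumptions.

```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer
from unsloth import FastLanguageModel

# Base model at the sequence length stated in the model card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,
)

# LoRA adapters (rank and target modules are assumptions, not from the card);
# DPO then trains only the adapter weights, which are merged afterwards.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Preference dataset named in the model card; the split name is an assumption.
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

# Hyperparameters from the model card: 1 epoch, lr 1e-07, beta 0.1.
config = DPOConfig(
    output_dir="dpo-qwen-cot",
    num_train_epochs=1,
    learning_rate=1e-07,
    beta=0.1,
    max_length=1024,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()

# Unsloth can export the LoRA-merged weights as a single 16-bit checkpoint,
# matching the "fully merged" form this model is distributed in.
model.save_pretrained_merged(
    "dpo-qwen-cot-merged", tokenizer, save_method="merged_16bit"
)
```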
Intended Use Cases
This model is particularly well-suited for applications where:
- Logical Reasoning is Crucial: Tasks requiring step-by-step thinking or complex problem-solving.
- Structured Output is Preferred: Generating responses that adhere to specific formats or logical structures.
- Efficiency is Key: As a 4B parameter model, it offers a balance between performance and computational resource requirements (a prompting sketch follows below).
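When structured output matters, an explicit format instruction can be combined with a step-by-step prompt. A short, self-contained sketch using the transformers text-generation pipeline; the prompt and the JSON schema are illustrative, not prescribed by the model card:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mutsumutsu/dpo-qwen-cot-merged",
    torch_dtype="auto",
    device_map="auto",
)

# Ask for step-by-step reasoning followed by a fixed output format.
messages = [
    {
        "role": "user",
        "content": (
            "A train travels 120 km in 1.5 hours. Think step by step, "
            'then answer as JSON: {"speed_kmh": <number>}'
        ),
    }
]

result = pipe(messages, max_new_tokens=256)
# With chat-style input, generated_text is the conversation; take the reply.
print(result[0]["generated_text"][-1]["content"])
```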
Users should adhere to the MIT license covering the training dataset as well as the license terms of the original base model.