Koba-8Tarku/dpo-qwen-cot-merged
Koba-8Tarku/dpo-qwen-cot-merged is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO). It is optimized to strengthen reasoning, particularly Chain-of-Thought (CoT), and to improve the quality of structured responses, making it suited to tasks that demand logical coherence and adherence to preferred output formats.
Model Overview
Koba-8Tarku/dpo-qwen-cot-merged is a 4-billion-parameter language model developed by Koba-8Tarku. It is a fine-tuned version of the Qwen/Qwen3-4B-Instruct-2507 base model, trained with Direct Preference Optimization (DPO) via the Unsloth library. The repository contains fully merged 16-bit weights, so no adapter loading is required.
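Because the weights are fully merged, the model should load directly with the standard transformers API. A minimal sketch (the model id comes from this card; the dtype and device settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Merged 16-bit weights load like any standard causal LM; no PEFT/adapter step needed.
model_id = "Koba-8Tarku/dpo-qwen-cot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit weights; use torch.float16 if bf16 is unsupported
    device_map="auto",
)
```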
Key Capabilities & Optimization
This model has been optimized through DPO to align its responses with preferred outputs. Its primary focus is on enhancing the following (see the inference sketch after this list):
- Reasoning (Chain-of-Thought): Improved ability to generate logical, step-by-step reasoning processes.
- Structured Response Quality: Better adherence to desired output formats and overall response coherence.
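As a rough illustration of eliciting step-by-step reasoning, the sketch below applies the tokenizer's chat template; the prompt and generation parameters are assumptions for demonstration, not values prescribed by the training setup. It assumes the `model` and `tokenizer` objects from the loading sketch above.

```python
# Illustrative CoT-style prompt; "think step by step" nudges the model toward reasoning.
messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. "
                                "What is its average speed? Think step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```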
Training Details
The DPO training ran for 1 epoch with a learning rate of 1e-7 and a beta value of 0.1. The maximum sequence length during training was 1024 tokens, and the LoRA adapter (r=8, alpha=16) was merged directly into the base model weights.
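For reference, here is a minimal sketch of how these hyperparameters could map onto a TRL `DPOConfig`/`DPOTrainer` run. The card states training used the Unsloth library; this sketch substitutes plain TRL with PEFT as an approximation, and everything beyond the hyperparameters listed above (output paths, dataset split, LoRA task type) is an assumption, not the author's exact script.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hyperparameters taken from this card; the remaining settings are illustrative.
training_args = DPOConfig(
    output_dir="dpo-qwen-cot",
    num_train_epochs=1,
    learning_rate=1e-7,
    beta=0.1,        # DPO temperature
    max_length=1024, # maximum sequence length during training
)
peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train"),
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()

# Merge the LoRA adapter into the base weights, then save the merged model.
merged = trainer.model.merge_and_unload()
merged.save_pretrained("dpo-qwen-cot-merged")
```

Merging the adapter after training is what yields the standalone 16-bit checkpoint described in the Model Overview.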
Use Cases
This model is particularly well-suited for applications where:
- Enhanced reasoning is critical, such as complex problem-solving or analytical tasks.
- Structured, high-quality outputs are required, with responses that are well formatted and aligned with specific preferences.
Licensing
The model is released under the MIT License, following the terms of its training dataset (u-10bei/dpo-dataset-qwen-cot). Users must also comply with the license terms of the original base model.