toshiyuki-kato/dpo-qwen-cot-merged
The toshiyuki-kato/dpo-qwen-cot-merged model is a 4-billion-parameter Qwen3-based instruction-tuned causal language model, fine-tuned by toshiyuki-kato with Direct Preference Optimization (DPO). It is optimized to improve Chain-of-Thought (CoT) reasoning and the quality of structured responses, making it suited to tasks that require logical coherence and adherence to preferred output formats.
Model Overview
This model, toshiyuki-kato/dpo-qwen-cot-merged, is a 4-billion-parameter language model based on the Qwen3-4B-Instruct-2507 architecture. It has been fine-tuned by toshiyuki-kato using Direct Preference Optimization (DPO) via the Unsloth library, with the fine-tuned LoRA weights merged into the base model and saved in 16-bit precision.
Key Capabilities & Optimization
The primary objective of this DPO fine-tuning was to align the model's responses with preferred outputs, specifically focusing on:
- Improved Reasoning: Enhancing Chain-of-Thought (CoT) capabilities.
- Structured Response Quality: Delivering more coherent and structured outputs based on a preference dataset.
Training Details
The model underwent 1 epoch of DPO training with a learning rate of 1e-07 and a beta value of 0.1. The maximum sequence length used during training was 1024 tokens. Training used a LoRA adapter (r=8, alpha=16) that has since been merged into the base model, so no adapter loading is required for inference.
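The following is a minimal sketch of a comparable DPO setup using Unsloth and TRL with the hyperparameters listed above. The dataset split, batch size, LoRA target modules, and output paths are assumptions, not taken from this card, and exact argument names vary across TRL versions.

```python
# Sketch of the DPO training described above, assuming Unsloth + TRL.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Load the base model in 16-bit at the training sequence length (1024).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,
    load_in_4bit=False,
)

# Attach a LoRA adapter matching the card: r=8, alpha=16.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")  # assumed split

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,                       # DPO beta from the card
        learning_rate=1e-7,             # learning rate from the card
        num_train_epochs=1,             # 1 epoch of DPO training
        max_length=1024,                # max sequence length from the card
        per_device_train_batch_size=2,  # assumed
        output_dir="dpo-qwen-cot",      # assumed
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer=
)
trainer.train()

# Merge the LoRA adapter into the base weights and save in 16-bit,
# producing a standalone model that needs no adapter at inference time.
model.save_pretrained_merged(
    "dpo-qwen-cot-merged", tokenizer, save_method="merged_16bit"
)
```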
Usage & Licensing
Because the adapter is merged, the model can be loaded directly with the transformers library, as in the sketch below. The training data, u-10bei/dpo-dataset-qwen-cot, is licensed under the MIT License; users must also comply with the base model's license terms.
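A minimal inference sketch with transformers; the prompt and generation settings are illustrative, and the chat template call follows the standard Qwen instruct format.

```python
# Load the merged model directly; no PEFT/adapter loading is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "toshiyuki-kato/dpo-qwen-cot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example prompt targeting the model's CoT reasoning.
messages = [
    {
        "role": "user",
        "content": "A farmer has 17 sheep. All but 9 run away. "
                   "How many are left? Think step by step.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```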