mihsato/dpo-qwen-cot-merged-mihsato-v1
The mihsato/dpo-qwen-cot-merged-mihsato-v1 model is a 4 billion parameter Qwen3-based instruction-tuned language model developed by mihsato. It has been fine-tuned using Direct Preference Optimization (DPO) with a focus on improving reasoning through Chain-of-Thought and enhancing structured response quality. This model is optimized for tasks requiring improved logical reasoning and coherent, structured outputs.
Overview
This model, mihsato/dpo-qwen-cot-merged-mihsato-v1, is a 4 billion parameter language model built upon the Qwen/Qwen3-4B-Instruct-2507 base. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, specifically targeting enhanced reasoning capabilities via Chain-of-Thought (CoT) and improved structured response generation.
Key Capabilities
- Enhanced Reasoning: Optimized to produce better logical reasoning paths (Chain-of-Thought) in its responses.
- Structured Output Quality: Fine-tuned to generate more coherent and structured answers based on preference datasets.
- Direct Preference Optimization (DPO): Utilizes DPO for alignment, focusing on preferred outputs.
- Merged Weights: Provided as a fully merged 16-bit model, eliminating the need for adapter loading.
Training Details
The model was trained for 1 epoch with a learning rate of 5e-07 and a DPO beta value of 0.1, using a maximum sequence length of 1024. Training used a LoRA configuration (r=8, alpha=16) whose weights were merged into the base model. The preference data for DPO was u-10bei/dpo-dataset-qwen-cot.
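For reference, the standard DPO objective that these hyperparameters plug into (with $\beta = 0.1$ controlling the strength of the KL-style penalty toward the frozen reference policy $\pi_{\mathrm{ref}}$) is:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Here $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$ in the preference dataset, and $\sigma$ is the logistic function. A small beta such as 0.1 allows the fine-tuned policy $\pi_\theta$ to deviate more from the base model in favor of preferred outputs.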
Usage
This model can be used directly with the transformers library, like any other pre-trained model, with no special adapter handling required. It is suitable for applications where improved reasoning and structured, high-quality responses are critical.
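A minimal loading sketch with the standard transformers API is shown below. Since the weights are fully merged, no PEFT/adapter code is needed; the generation settings (temperature, max_new_tokens) are illustrative assumptions, not values recommended by the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mihsato/dpo-qwen-cot-merged-mihsato-v1"

# Load tokenizer and merged 16-bit weights directly; no adapter loading required.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen3-Instruct models use a chat template; build the prompt through it.
messages = [
    {"role": "user", "content": "A train travels 120 km in 2 hours. What is its average speed?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings; tune for your application.
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model is DPO-tuned for Chain-of-Thought, prompts that invite step-by-step reasoning tend to play to its strengths.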