Overview
arata1/dpo-qwen-cot-e2-b05-1024 is a 4-billion-parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It has been fine-tuned using Direct Preference Optimization (DPO) with the Unsloth library to align its responses with preferred outputs. The model ships as fully merged 16-bit weights, so no adapter loading is required.
Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and step-by-step responses.
- Structured Output Quality: Focuses on generating higher quality and more structured responses based on preference datasets.
- Direct Use: As a merged model, it can be used directly with the transformers library without additional configuration.
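Because the weights are fully merged, loading follows the standard transformers pattern for Qwen3 instruct models. The sketch below is illustrative: the generation settings (`max_new_tokens`, dtype and device placement) are assumptions, not values published with the model.

```python
# Minimal usage sketch with the transformers library.
# Chat formatting relies on the tokenizer's built-in chat template;
# generation parameters here are illustrative, not tuned values.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "arata1/dpo-qwen-cot-e2-b05-1024"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Load the merged 16-bit model and return a single completion."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated text.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Solve step by step: what is 17 * 24?"))
```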
Training Details
The model underwent DPO training for 2 epochs with a learning rate of 1e-07 and a beta value of 0.05. It was trained with a maximum sequence length of 1024 tokens. The training utilized the u-10bei/dpo-dataset-qwen-cot dataset.
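The training script itself is not published; the sketch below reconstructs the stated hyperparameters in a plausible Unsloth + TRL setup. Argument names follow TRL's `DPOConfig`, and everything beyond the stated learning rate, beta, epoch count, sequence length, and dataset is an assumption.

```python
# Hyperparameters stated in the model card; everything else below is assumed.
TRAINING_ARGS = {
    "learning_rate": 1e-7,    # stated learning rate
    "beta": 0.05,             # DPO beta (strength of the preference penalty)
    "num_train_epochs": 2,    # stated epoch count
    "max_length": 1024,       # stated maximum sequence length
}

def build_trainer():
    """Assemble a DPO trainer mirroring the described setup (sketch only)."""
    # Imports are kept local because unsloth/trl require a GPU environment.
    from unsloth import FastLanguageModel
    from datasets import load_dataset
    from trl import DPOConfig, DPOTrainer

    model, tokenizer = FastLanguageModel.from_pretrained(
        "Qwen/Qwen3-4B-Instruct-2507",
        max_seq_length=1024,
    )
    dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")
    config = DPOConfig(output_dir="dpo-qwen-cot", **TRAINING_ARGS)
    return DPOTrainer(
        model=model,
        args=config,
        train_dataset=dataset,
        processing_class=tokenizer,
    )
```

The small beta (0.05) keeps the fine-tuned policy close to the reference model, which fits the goal of nudging response style and reasoning structure rather than changing the base model's behavior wholesale.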
Good For
- Applications requiring improved reasoning and logical flow in responses.
- Scenarios where structured and high-quality outputs are critical.
- Developers seeking a readily deployable Qwen3-based model with enhanced instruction following.