Model Overview
This model, hgoto666/dpo-qwen-cot-merged, is a 4-billion-parameter language model derived from hgoto666/unsloth-qwen3-4b-structured-output-lora-3-mix-strategy-ver3. It has been further optimized using Direct Preference Optimization (DPO) via the Unsloth library, and its 16-bit weights are fully merged for direct use with the transformers library.
Key Capabilities
- Enhanced Reasoning: Optimized through DPO to improve Chain-of-Thought (CoT) reasoning, making it suitable for complex problem-solving tasks.
- Improved Structured Output: Specifically fine-tuned to produce higher quality structured responses, aligning with preferred output formats.
- Efficient Fine-tuning: Trained with the Unsloth library, which reduces memory usage and speeds up DPO fine-tuning compared with a standard training loop.
- Direct Usage: As a fully merged model, it requires no adapter loading, simplifying deployment.
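Because the LoRA weights are already merged, the model can be loaded like any standard causal LM checkpoint. The following is a minimal sketch using the transformers library; the generation settings and the helper function names are illustrative, not part of the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hgoto666/dpo-qwen-cot-merged"

def load_model():
    # Weights are fully merged, so no PEFT/adapter loading step is needed.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # picks up the merged 16-bit weights
        device_map="auto",
    )
    return tokenizer, model

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    # Illustrative helper: format a single-turn chat and decode the reply.
    tokenizer, model = load_model()
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(
        output[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

The chat template is applied via the tokenizer, so prompts follow the base Qwen3 conversation format without manual special-token handling.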
Training Details
The model underwent 1 epoch of DPO training with a learning rate of 1e-7 and a beta value of 0.1, at a maximum sequence length of 1024 tokens. Training used a LoRA configuration (r=8, alpha=16), which has since been merged into the base model. The preference data used for DPO was u-10bei/dpo-dataset-qwen-cot.
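To make the role of beta concrete, the per-pair DPO objective can be sketched in plain Python. Beta scales the margin between the policy's and the reference model's log-probability ratios for the chosen versus rejected response; the log-probability values below are made up for illustration and are not taken from this model's training run.

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log(sigmoid(beta * margin)).

    margin = (log-ratio of the chosen response) minus
             (log-ratio of the rejected response),
    where each log-ratio is policy log-prob minus reference log-prob.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Hypothetical pair: the policy has shifted 1 nat toward the chosen answer
# and 1 nat away from the rejected one, giving a margin of 2 nats.
loss = dpo_loss(-10.0, -14.0, -11.0, -13.0, beta=0.1)
```

With beta=0.1 the gradient pressure per unit of margin is gentle, which fits the conservative learning rate (1e-7) used here: the model is nudged toward preferred outputs without drifting far from the reference policy.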
Ideal Use Cases
This model is particularly well-suited for applications where:
- Logical Reasoning is Critical: Tasks that benefit from explicit step-by-step reasoning.
- Structured Data Generation is Required: Scenarios demanding well-formatted and consistent outputs.
- Preference Alignment is Important: Use cases where model responses need to closely match human preferences for quality and structure.
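When using the model for structured data generation, it is still good practice to validate the output downstream. The helper below is a hypothetical consumer-side sketch using only the standard library; the key names are an assumed schema, not one defined by this model.

```python
import json

def parse_structured_reply(raw: str) -> dict:
    """Extract the first JSON object from a model reply and check required keys.

    Illustrative helper: the required keys ("reasoning", "answer") are an
    assumed schema for a CoT-style structured response.
    """
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    data = json.loads(raw[start:end + 1])
    for key in ("reasoning", "answer"):
        if key not in data:
            raise ValueError(f"missing required key: {key}")
    return data

# Example reply mixing prose with a JSON payload.
reply = 'Here is the result: {"reasoning": "2 + 2 = 4", "answer": "4"}'
parsed = parse_structured_reply(reply)
```

A check like this turns format drift into an explicit error (or a retry trigger) instead of a silent downstream failure.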