Model Overview
This model, kedumerikugame/dpo-qwen-cot-merged, is a 4-billion-parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It was fine-tuned with Direct Preference Optimization (DPO), a method for aligning language models with human preferences, using the Unsloth library. The repository provides the full, merged 16-bit weights, so no separate adapter loading is required.
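Because the weights are fully merged, the model can be loaded like any standard Hugging Face checkpoint; a minimal sketch using transformers (the dtype and device settings are assumptions, not stated in the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kedumerikugame/dpo-qwen-cot-merged"

# Load the merged 16-bit weights directly -- no PEFT/adapter step is needed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 matches the released 16-bit weights
    device_map="auto",           # assumption: place layers automatically across devices
)
```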
Training Details
The model was trained for a single epoch of DPO with a learning rate of 1e-7 and a beta value of 0.1, using a maximum sequence length of 1024 tokens. Fine-tuning used LoRA with rank r=8 and scaling factor alpha=16; the resulting adapter has been merged into the base model.
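The reported hyperparameters can be expressed as a TRL `DPOConfig` plus a PEFT `LoraConfig`. This is only a sketch under the assumption that training used TRL's DPO tooling (as is typical with Unsloth); the output path and any unlisted fields are placeholders:

```python
from trl import DPOConfig
from peft import LoraConfig

# Reported DPO hyperparameters; all other fields are assumptions.
training_args = DPOConfig(
    output_dir="dpo-qwen-cot",  # hypothetical output path
    num_train_epochs=1,         # single epoch, as reported
    learning_rate=1e-7,
    beta=0.1,                   # DPO preference-loss temperature
    max_length=1024,            # maximum sequence length
)

# LoRA configuration later merged into the base weights.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    task_type="CAUSAL_LM",
)
```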
Key Capabilities
- Preference Alignment: Enhanced response quality and alignment with desired outputs through DPO training.
- Efficient Deployment: Fully merged weights simplify deployment without requiring adapter management.
- Conversational AI: Suitable for chat-based applications and instruction following, building on the capabilities of its Qwen3-Instruct base.
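For chat-based use, the tokenizer's chat template (inherited from the Qwen3-Instruct base) formats the conversation; a sketch of single-turn inference, with generation settings as assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kedumerikugame/dpo-qwen-cot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]

# Apply the chat template and append the assistant generation prompt.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)  # assumed generation budget
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```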
Use Cases
This model is particularly well-suited for applications requiring:
- Generating high-quality, preference-aligned text in conversational contexts.
- Instruction-following tasks where nuanced responses are beneficial.
- Scenarios where a compact, fully merged model is preferred for ease of use and inference.