Overview
This model, sfutenma/dpo-qwen3_4b-cot-merged_v260301-151110, is a 4-billion-parameter language model based on the Qwen3 architecture. It was fine-tuned by sfutenma with Direct Preference Optimization (DPO) via the Unsloth library, starting from the sfutenma/lora_structeval_t_qwen3_4b_v260228-172650 base model. The repository provides the fully merged 16-bit weights, so no adapter loading is required.
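Because the LoRA weights are already merged into the checkpoint, the model can be loaded with the standard transformers API alone, with no PEFT step. A minimal sketch (the dtype and device settings are illustrative and depend on your hardware):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sfutenma/dpo-qwen3_4b-cot-merged_v260301-151110"

# No adapter loading needed: the merged 16-bit weights are the full model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # weights are stored in 16-bit precision
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain step by step why 17 is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```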
Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, making it suitable for tasks requiring logical steps.
- Structured Response Quality: Fine-tuned on preference data to produce higher-quality, more structured outputs.
- Direct Preference Optimization: Utilizes DPO for alignment, focusing on user preferences in its responses.
Training Details
The model underwent 5 epochs of DPO training with a learning rate of 2e-05 and a beta value of 0.03, at a maximum sequence length of 768 tokens. Training used a LoRA configuration of r=64, alpha=64, which was subsequently merged into the base model. The preference data was u-10bei/dpo-dataset-qwen-cot.
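For intuition, the DPO objective minimizes the negative log-sigmoid of the scaled reward margin between the chosen and rejected responses; the beta of 0.03 above controls how strongly the policy is allowed to drift from the reference model. A minimal per-example sketch in plain Python (the log-probabilities are illustrative placeholders, not values from this model):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.03):
    """Per-example DPO loss: -log sigmoid(beta * reward margin)."""
    # Implicit rewards: how much the policy prefers each response vs. the reference.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no margin the loss is log(2); it shrinks as the policy favors the chosen response.
print(dpo_loss(-10.0, -20.0, -10.0, -20.0))  # ~0.6931
print(dpo_loss(-10.0, -20.0, -12.0, -18.0))  # smaller
```

A higher beta would penalize drift from the reference model more sharply; 0.03 is a relatively permissive setting.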
Good For
- Applications requiring improved logical reasoning and step-by-step problem-solving.
- Generating well-structured and coherent text outputs.
- Tasks where alignment with preferred response styles is crucial.