eridon-pro/dpo-qwen-cot-merged-from-sft-adapter-38-1
The eridon-pro/dpo-qwen-cot-merged-from-sft-adapter-38-1 model is a 4-billion-parameter, Qwen3-based instruction-tuned language model, fine-tuned with Direct Preference Optimization (DPO) by eridon-pro. It is aligned to improve Chain-of-Thought (CoT) reasoning and the quality of structured responses. Building on its Qwen3-4B-Instruct-2507 base, it is suited to tasks that require logical deduction and coherent, well-structured outputs.
Model Overview
eridon-pro/dpo-qwen-cot-merged-from-sft-adapter-38-1 is a 4-billion-parameter language model based on the Qwen3-4B-Instruct-2507 architecture, further fine-tuned by eridon-pro with Direct Preference Optimization (DPO) via the Unsloth library. The repository ships fully merged 16-bit weights, so no adapter loading is required.
Key Capabilities
- Enhanced Reasoning: Optimized through DPO to improve Chain-of-Thought (CoT) reasoning, making it suitable for tasks requiring logical steps and deductions.
- Structured Response Quality: Specifically aligned to produce higher quality, more structured outputs based on preference datasets.
- Direct Use: As a fully merged model, it can be used directly with the `transformers` library, with no additional configuration for LoRA adapters.
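The direct-use point above can be sketched with standard `transformers` calls. This is a minimal, illustrative example, not an official snippet from the model card: the system prompt, question, and generation settings are assumptions, and the heavy imports are done lazily inside the helper so it is cheap to define.

```python
MODEL_ID = "eridon-pro/dpo-qwen-cot-merged-from-sft-adapter-38-1"

# A chat-style prompt; the system message is an illustrative assumption.
messages = [
    {"role": "system", "content": "You are a careful step-by-step reasoner."},
    {"role": "user", "content": "A train leaves at 3 pm at 60 km/h. How far has it travelled by 5 pm?"},
]

def generate(prompt_messages, max_new_tokens=512):
    """Load the merged 16-bit weights and generate a response.

    Imports are local so defining this helper does not require
    transformers/torch to be installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Qwen3 models use a chat template; apply it and keep only new tokens.
    input_ids = tokenizer.apply_chat_template(
        prompt_messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate(messages))
```

Because the weights are already merged, no `peft`/LoRA loading step is needed before `from_pretrained`.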
Training Details
The model was trained for 2 epochs with a learning rate of 1e-6 and a DPO beta of 0.1, using a maximum sequence length of 1024. Preference alignment used the u-10bei/dpo-dataset-qwen-cot dataset. The starting checkpoint was an SFT-merged model derived from eridon-pro/lora_structeval_t_qwen3_4b-38.
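For intuition about the beta hyperparameter above, here is a minimal sketch of the per-example DPO objective (Rafailov et al., 2023) in plain Python. The log-ratio inputs are illustrative; this is not the actual Unsloth training code, which handles the log-probability computation internally.

```python
import math

BETA = 0.1  # the preference-strength coefficient reported for this run

def dpo_loss(chosen_logratio, rejected_logratio, beta=BETA):
    """Per-example DPO loss: -log sigmoid(beta * (d_chosen - d_rejected)),
    where each d is log pi_theta(y|x) - log pi_ref(y|x) for a completion.

    A larger margin between the chosen and rejected log-ratios drives the
    loss toward 0; beta scales how sharply preferences are enforced.
    """
    margin = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no margin the loss is log(2); favouring the chosen completion lowers it.
baseline = dpo_loss(0.0, 0.0)      # log(2) ~= 0.693
improved = dpo_loss(2.0, -1.0)     # positive margin -> smaller loss
```

With beta = 0.1 the margin is scaled gently, so the policy is nudged toward preferred CoT responses without drifting far from the SFT reference.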
Licensing
This model is released under the MIT License, consistent with its training data. Users must also comply with the license terms of the original base model, Qwen3-4B-Instruct-2507.