Overview
This model, reiwa7/dpo-qwen-cot-merged, is a 4-billion-parameter language model based on the Qwen3-4B-Instruct-2507 architecture. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, and the result is a fully merged 16-bit weight model that requires no adapter loading.
Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, making it suitable for complex problem-solving tasks.
- Improved Structured Responses: Aligned to produce higher quality and more structured outputs based on preferred examples.
- Direct Use: As a fully merged model, it can be loaded and used directly with the transformers library.
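Because the DPO weights are already merged, loading follows the standard transformers pattern with no PEFT/adapter step. A minimal sketch is below; the question string and generation parameters are illustrative, and the chat-template helper simply wraps `tokenizer.apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "reiwa7/dpo-qwen-cot-merged"

def build_prompt(tokenizer, question: str) -> str:
    # Qwen3 instruct models ship a chat template; apply it instead of
    # concatenating raw role strings by hand.
    messages = [{"role": "user", "content": question}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = build_prompt(
        tokenizer, "A train covers 60 km in 45 minutes. What is its average speed?"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ))
```

No quantization or adapter merging is needed at load time, since the repository already contains the merged 16-bit weights.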
Training Details
The model was fine-tuned for 1 epoch with a learning rate of 5e-05 and a beta value of 0.065, using a maximum sequence length of 1024. The training utilized the u-10bei/dpo-dataset-qwen-cot dataset, which focuses on preference alignment for reasoning and structured outputs.
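To make the role of the beta hyperparameter concrete, the standard DPO objective can be sketched in a few lines of plain Python; this is a simplified illustration of the loss, not the Unsloth training code. Beta (0.065 here) scales how strongly the policy is pushed to prefer chosen over rejected responses relative to the reference model:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.065) -> float:
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    is how much more the policy prefers the chosen response than the
    reference model does (log-probabilities of full sequences)."""
    margin = (policy_chosen_logp - policy_rejected_logp) \
           - (ref_chosen_logp - ref_rejected_logp)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin (policy and reference agree) the loss sits at log 2; as the policy comes to prefer the chosen response more than the reference does, the loss falls toward zero. A small beta such as 0.065 keeps the gradient pressure gentle, limiting drift from the base model.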
Usage Considerations
This model is well suited to applications where logical reasoning, coherent thought processes, and well-structured answers are critical. Users should note that the model is released under the MIT License, per the dataset's terms, and that compliance with the license of the base model (Qwen3-4B-Instruct-2507) is also required.