ortiz-ai/dpo-qwen-cot-merged
The ortiz-ai/dpo-qwen-cot-merged model is a 4-billion-parameter Qwen3-based causal language model fine-tuned by ortiz-ai using Direct Preference Optimization (DPO). It is optimized to enhance reasoning capabilities, particularly Chain-of-Thought (CoT), and to improve the quality of structured responses. The model is designed for applications that require robust logical inference and well-formatted outputs, and it supports a 40,960-token context length.
Overview
This model, ortiz-ai/dpo-qwen-cot-merged, is a 4-billion-parameter language model built on the Qwen3-4B-Instruct-2507 base. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, specifically targeting improvements in reasoning and structured response generation. The model's 16-bit weights are fully merged, eliminating the need for adapter loading.
Key Capabilities
- Enhanced Reasoning (Chain-of-Thought): Optimized to produce more coherent and logical reasoning steps in its responses.
- Improved Structured Output: Fine-tuned to generate higher quality and better-structured answers based on preferred outputs.
- Direct Use: As a fully merged model, it can be loaded directly with the `transformers` library, with no additional configuration for LoRA adapters.
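Because the weights are fully merged, loading follows the standard `transformers` pattern for Qwen3 instruct models. The sketch below is a minimal, hedged example: the model id comes from this card, but device placement, dtype, and generation settings are assumptions you should adjust for your hardware. The heavy import is deferred into the function so the helper code stays lightweight.

```python
MODEL_ID = "ortiz-ai/dpo-qwen-cot-merged"

def build_messages(question: str) -> list:
    # Qwen3 instruct models use the standard chat-message format.
    return [{"role": "user", "content": question}]

def generate_answer(question: str, max_new_tokens: int = 512) -> str:
    # Deferred import keeps the sketch importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick the checkpoint's native precision
        device_map="auto",    # place layers on available GPU(s)/CPU
    )
    # Render the chat messages into the model's prompt format.
    prompt = tokenizer.apply_chat_template(
        build_messages(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For CoT-style tasks, a plain question is usually enough; the model is tuned to produce its reasoning steps before the final answer.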
Training Details
The model was trained for 1 epoch with a learning rate of 1e-7 and a DPO beta of 0.1, using a maximum sequence length of 1024 tokens. Training used the u-10bei/dpo-dataset-qwen-cot dataset. The model retains the 40,960-token context length inherited from its base architecture.
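To make the role of the beta hyperparameter concrete: DPO minimizes the negative log-sigmoid of the reward margin between the preferred ("chosen") and dispreferred ("rejected") completions, where each reward is beta times the log-probability ratio between the policy and a frozen reference model. This is a small pure-Python sketch of that per-example loss, not the Unsloth training code used for this model; only the beta value (0.1) is taken from this card.

```python
import math

def dpo_loss(policy_chosen_logp: float,
             policy_rejected_logp: float,
             ref_chosen_logp: float,
             ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log(sigmoid(beta * margin)).

    beta=0.1 matches the value reported for this model's training; a
    smaller beta lets the policy drift further from the reference model.
    """
    # Implicit rewards: beta-scaled log-prob ratios vs. the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # Numerically plain -log(sigmoid(margin)); fine for illustration.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no preference margin the loss is -log(0.5) = log(2) ~ 0.693;
# it shrinks as the policy favors the chosen completion more strongly.
```

The low learning rate (1e-7) is typical for DPO, which fine-tunes an already-instruction-tuned model and only needs gentle preference-driven updates.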
Good For
- Applications requiring strong logical reasoning.
- Tasks where structured and well-formatted responses are critical.
- Developers seeking a Qwen3-based model with enhanced CoT capabilities.