a2cokubo/dpo-qwen-cot-merged

Text Generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Feb 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The a2cokubo/dpo-qwen-cot-merged model is a 4 billion parameter Qwen3-based language model, fine-tuned using Direct Preference Optimization (DPO) with Unsloth. It is specifically optimized to enhance reasoning capabilities, particularly Chain-of-Thought (CoT), and improve the quality of structured responses. This model is designed for applications requiring robust logical inference and well-formed outputs.


Model Overview

The a2cokubo/dpo-qwen-cot-merged model is a 4 billion parameter language model built upon the Qwen3-4B-Instruct-2507 base. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, specifically targeting improvements in reasoning and structured response generation.

Key Capabilities

  • Enhanced Reasoning: Optimized for Chain-of-Thought (CoT) reasoning, making it suitable for tasks requiring multi-step logical deduction.
  • Improved Response Quality: Fine-tuned to produce higher quality and more structured outputs based on preferred examples.
  • Direct Use: Provided as a fully merged 16-bit model, eliminating the need for adapter loading and allowing direct integration with the transformers library.
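Because the weights ship fully merged, the model can be loaded with `AutoModelForCausalLM.from_pretrained("a2cokubo/dpo-qwen-cot-merged")` and prompted through the tokenizer's chat template. As a stand-alone illustration of what that template produces, here is a minimal sketch of the ChatML-style format used by the Qwen family (an assumption about this checkpoint's template; in practice, use `tokenizer.apply_chat_template` rather than hand-formatting):

```python
# Sketch of a ChatML-style prompt as used by Qwen-family models.
# In real use, prefer tokenizer.apply_chat_template, which handles
# special tokens and the generation prompt automatically.

def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts into a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Leave the assistant turn open so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful reasoning assistant."},
    {"role": "user", "content": "Solve step by step: what is 17 * 23?"},
]
print(build_chatml_prompt(messages))
```

The rendered string is what gets tokenized and passed to `model.generate`; the open assistant turn at the end is where the model's CoT response begins.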

Training Details

The model was trained for 1 epoch with a learning rate of 1e-05, a DPO beta of 0.3, and a maximum sequence length of 1024 tokens. Optimization was guided by the preference dataset u-10bei/dpo-dataset-qwen-cot.
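The objective those hyperparameters feed into is the standard DPO loss, which can be sketched for a single preference pair as follows (this is the textbook formulation, not code from this model's actual training script; only beta = 0.3 is taken from the run above):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.3):
    """Standard DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    beta (0.3 here, matching this model's training run) scales how
    strongly the policy is pushed away from the reference.
    """
    margin = (policy_chosen_logp - ref_chosen_logp) - \
             (policy_rejected_logp - ref_rejected_logp)
    # Equivalent to -log(sigmoid(beta * margin)).
    return math.log(1.0 + math.exp(-beta * margin))

# The loss shrinks as the policy favors the chosen response more
# strongly than the reference does:
print(dpo_loss(-1.0, -3.0, -2.0, -2.0))  # margin = +2
print(dpo_loss(-2.0, -2.0, -2.0, -2.0))  # margin = 0
```

Minimizing this over the preference dataset nudges the merged model toward the chosen (well-reasoned, well-structured) responses while beta keeps it anchored to the Qwen3-4B-Instruct-2507 reference.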

Good For

  • Applications requiring strong logical reasoning and problem-solving.
  • Generating structured and coherent text outputs.
  • Developers seeking a Qwen3-based model with stronger reasoning out of the box.