arata1/dpo-qwen-cot-e2-b05-1024

Text generation · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Feb 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

arata1/dpo-qwen-cot-e2-b05-1024 is a 4 billion parameter Qwen3-based instruction-tuned model, fine-tuned with Direct Preference Optimization (DPO) via Unsloth. It is optimized to strengthen Chain-of-Thought (CoT) reasoning and the quality of structured responses, making it suited to tasks that demand logical coherence and well-formatted outputs.


Overview

arata1/dpo-qwen-cot-e2-b05-1024 is a 4 billion parameter language model derived from Qwen/Qwen3-4B-Instruct-2507. It has been fine-tuned using Direct Preference Optimization (DPO) with the Unsloth library to align its responses with preferred outputs. The repository ships fully merged 16-bit weights, so no LoRA adapter loading is required.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and step-by-step responses.
  • Structured Output Quality: Focuses on generating higher quality and more structured responses based on preference datasets.
  • Direct Use: As a merged model, it can be used directly with the transformers library without additional configuration.
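For direct use, prompts follow the ChatML-style template that Qwen3 instruct models inherit from the base model. In practice you would call `tokenizer.apply_chat_template` from the transformers library; the stdlib-only helper below (the name `build_chatml_prompt` is hypothetical, for illustration) sketches what that template produces, assuming the standard Qwen `<|im_start|>`/`<|im_end|>` markers.

```python
# Minimal sketch of the ChatML-style prompt format used by Qwen3 instruct
# models. In real code, prefer tokenizer.apply_chat_template from the
# transformers library; this stdlib-only helper is illustrative.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML-style string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the prompt open so the model generates the assistant turn.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Solve 17 * 23 step by step."},
])
print(prompt)
```

Passing the rendered string to the model (or, equivalently, the message list to `apply_chat_template`) is all that is needed, since the merged weights load like any other causal LM checkpoint.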

Training Details

The model underwent DPO training for 2 epochs with a learning rate of 1e-7, a DPO beta of 0.05, and a maximum sequence length of 1024 tokens, using the u-10bei/dpo-dataset-qwen-cot dataset (the epoch count, beta, and sequence length are reflected in the model name: e2, b05, 1024).
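To make the beta value concrete: in DPO, beta scales the margin between the policy's and the reference model's log-probability ratios on chosen versus rejected responses. The sketch below computes the standard per-example DPO loss with beta=0.05; the log-probability numbers are made up for illustration, and this is not the card's training code.

```python
import math

# Sketch of the per-example DPO loss with beta = 0.05, as used here.
# Inputs are summed token log-probabilities of the chosen and rejected
# responses under the policy and the frozen reference model.

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.05):
    """-log sigmoid(beta * (chosen log-ratio - rejected log-ratio))."""
    chosen_logratio = policy_chosen - ref_chosen
    rejected_logratio = policy_rejected - ref_rejected
    margin = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that prefers the chosen response more than the reference does
# (positive margin) yields a loss below log(2) ~= 0.693.
loss = dpo_loss(policy_chosen=-40.0, policy_rejected=-55.0,
                ref_chosen=-45.0, ref_rejected=-50.0)
print(round(loss, 4))  # -> 0.4741
```

A small beta like 0.05 keeps the margin gentle, so the policy is pulled toward the preference data without drifting far from the reference model.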

Good For

  • Applications requiring improved reasoning and logical flow in responses.
  • Scenarios where structured and high-quality outputs are critical.
  • Developers seeking a readily deployable Qwen3-based model with enhanced instruction following.