reiwa7/dpo-qwen-cot-merged

  • Task: Text Generation
  • Model Size: 4B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Feb 3, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)
  • Concurrency Cost: 1

The reiwa7/dpo-qwen-cot-merged model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507, aligned with Direct Preference Optimization (DPO) via Unsloth. This 4-billion-parameter model is tuned to strengthen Chain-of-Thought (CoT) reasoning and to improve the quality of structured responses, making it suited to tasks that demand logical coherence and adherence to preferred output formats.


Overview

This model, reiwa7/dpo-qwen-cot-merged, is a 4-billion-parameter language model based on the Qwen3-4B-Instruct-2507 architecture. It has undergone Direct Preference Optimization (DPO) using the Unsloth library, and the result is a fully merged 16-bit weight model that requires no adapter loading.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, making it suitable for complex problem-solving tasks.
  • Improved Structured Responses: Aligned to produce higher quality and more structured outputs based on preferred examples.
  • Direct Use: As a fully merged model, it can be loaded directly with the transformers library (see the sketch after this list).
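
A minimal inference sketch with the transformers library is shown below. Only the model id comes from this card; the dtype choice (bfloat16, matching the published BF16 weights), the example prompt, and the generation settings are illustrative assumptions.

```python
# Minimal inference sketch. Assumed settings: bfloat16 to match the BF16
# weights, and an illustrative max_new_tokens value.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reiwa7/dpo-qwen-cot-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Build a chat prompt using the model's own chat template.
messages = [
    {"role": "user", "content": "Solve step by step: what is 17 * 24?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the DPO weights are already merged into the base model, no peft adapter setup is needed; the standard from_pretrained path above is sufficient.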

Training Details

The model was fine-tuned for 1 epoch with a learning rate of 5e-05 and a beta of 0.065 (the DPO parameter controlling how strongly the policy is penalized for deviating from the reference model), using a maximum sequence length of 1024. Training used the u-10bei/dpo-dataset-qwen-cot dataset, which targets preference alignment for reasoning and structured outputs.
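
The exact training script is not published. The sketch below is a plausible reconstruction of an Unsloth + TRL DPO run from the stated hyperparameters (beta 0.065, learning rate 5e-05, 1 epoch, max sequence length 1024); the LoRA configuration, batch size, and the final merged 16-bit export are assumptions.

```python
# Hypothetical reconstruction of the DPO run. Only beta, learning rate,
# epochs, max sequence length, and the dataset name come from this card;
# everything else (LoRA rank/targets, batch size) is an assumption.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,  # stated in the model card
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # assumed LoRA rank
    lora_alpha=16,    # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice
)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.065,                     # stated in the model card
        learning_rate=5e-5,             # stated in the model card
        num_train_epochs=1,             # stated in the model card
        max_length=1024,                # stated in the model card
        per_device_train_batch_size=2,  # assumed
        output_dir="dpo-qwen-cot",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()

# Merge the LoRA adapter into the base weights and export as 16-bit,
# which is why the published model requires no adapter loading.
model.save_pretrained_merged("dpo-qwen-cot-merged", tokenizer,
                             save_method="merged_16bit")
```

The final save_pretrained_merged step produces the "fully merged 16-bit" checkpoint described in the Overview, folding the adapter deltas into the base weights so the published model loads as a plain transformers checkpoint.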

Usage Considerations

This model is ideal for applications where logical reasoning, coherent thought processes, and well-structured answers are critical. Users should note that this model is distributed under the MIT License, per the dataset's terms, and that compliance with the license of the base model, Qwen/Qwen3-4B-Instruct-2507 (Apache-2.0), is also required.