daichira/dpo-qwen-cot-merged-r8
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The daichira/dpo-qwen-cot-merged-r8 model is a 4-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen3-4B-Instruct-2507 with Direct Preference Optimization (DPO) via Unsloth. It is optimized for Chain-of-Thought (CoT) reasoning and higher-quality structured responses. The weights ship fully merged in 16-bit precision, eliminating the need for adapter loading, and the model is best suited to applications that demand logical reasoning and coherent, structured output.


Overview

daichira/dpo-qwen-cot-merged-r8 is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507. It uses Direct Preference Optimization (DPO) with the Unsloth library to improve response quality. The model ships with fully merged 16-bit weights, so no separate adapter loading is required for deployment.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought (CoT) reasoning, leading to more logical and step-by-step problem-solving.
  • Structured Response Quality: Focuses on generating higher quality, more coherent, and structured outputs based on preferred examples.
  • Direct Use: As a merged model, it can be used directly with the Hugging Face transformers library without additional configuration for LoRA adapters.
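Because the DPO weights are fully merged, the model drops straight into the standard `transformers` loading path with no PEFT/LoRA step. A minimal inference sketch, assuming the `transformers` and `torch` packages are installed; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daichira/dpo-qwen-cot-merged-r8"

# Loads like any standard causal LM -- no adapter configuration needed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

# Qwen3-Instruct models use a chat template; an explicit step-by-step
# request plays to the model's CoT tuning.
messages = [
    {
        "role": "user",
        "content": "A train travels 120 km in 1.5 hours. "
                   "What is its average speed? Think step by step.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

This is a sketch, not an official quickstart: sampling parameters, context handling, and memory placement (`device_map`) should be tuned to your hardware and workload.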

Good for

  • Applications requiring improved logical reasoning and multi-step problem-solving.
  • Scenarios where structured and high-quality text generation is critical.
  • Developers looking for a DPO-optimized Qwen3-4B variant that is ready for immediate inference without adapter management.