motobrew/utokyo-llm-comp-dpo-v2
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

motobrew/utokyo-llm-comp-dpo-v2 is a 4-billion-parameter, Qwen3-based, instruction-tuned causal language model, fine-tuned by motobrew with Direct Preference Optimization (DPO). It is optimized to strengthen Chain-of-Thought reasoning and structured response quality, targeting tasks that require outputs aligned with preferred examples.


Model Overview

motobrew/utokyo-llm-comp-dpo-v2 is a 4 billion parameter language model developed by motobrew, built on unsloth/Qwen3-4B-Instruct-2507. It was fine-tuned with Direct Preference Optimization (DPO) via the Unsloth library, and its 16-bit weights are fully merged, so the model can be used directly without loading adapters.

Key Capabilities

  • Enhanced Reasoning: Optimized to improve Chain-of-Thought reasoning, leading to more logical and structured outputs.
  • Improved Response Quality: Fine-tuned to align responses with preferred examples, enhancing overall output quality.
  • Structured Output Generation: Focuses on generating well-structured responses based on the training objective.

Training Details

The model underwent 1 epoch of DPO training with a learning rate of 1e-07 and a beta value of 0.1. It utilized a maximum sequence length of 1024 and was trained on the u-10bei/dpo-dataset-qwen-cot dataset. The LoRA configuration (r=8, alpha=16) was merged into the base model.
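For intuition, the DPO objective that the beta value of 0.1 parameterizes can be sketched in plain Python. This is an illustration of the loss formula, not the training code used for this model: given summed log-probabilities of a chosen and a rejected response under the policy and a frozen reference model, the loss is −log σ(β · ((log πθ(y_w) − log πref(y_w)) − (log πθ(y_l) − log πref(y_l)))).

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of a full response
    under either the trained policy or the frozen reference model.
    """
    # Implicit reward of each response, measured relative to the reference
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written as log1p(exp(-x)) for moderate logits
    return math.log1p(math.exp(-logits))

# When policy and reference agree exactly, the loss is log 2
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
# Policy favoring the chosen response more than the reference lowers the loss
print(dpo_loss(-9.0, -13.0, -10.0, -12.0) < math.log(2))  # True
```

Minimizing this loss pushes the policy to assign relatively more probability to preferred responses than the reference model does, with beta controlling how far the policy may drift from the reference.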

Usage

This model integrates directly into applications via the transformers library, with no additional adapter loading required. It supports a context length of 40960 tokens, making it suitable for tasks that require extensive context processing.
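Since the base model belongs to the Qwen3 family, prompts follow a ChatML-style chat template. In practice you would call `tokenizer.apply_chat_template(...)` from transformers, which applies the template shipped with the model; the sketch below only shows the expected shape of the formatted prompt and is an assumption about the template, not an excerpt from it:

```python
def format_chatml(messages):
    """Render a message list in the ChatML-style layout used by Qwen models.

    Prefer tokenizer.apply_chat_template(...) in real code; this sketch
    only illustrates the general structure of the prompt.
    """
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model completes it
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]
print(format_chatml(messages))
```

The tokenizer's bundled template is authoritative and may add details this sketch omits, so use `apply_chat_template` rather than hand-building prompt strings in production code.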