taka104/qwen3-4b-dpo-qwen-cot-merged
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

The taka104/qwen3-4b-dpo-qwen-cot-merged model is a 4 billion parameter instruction-tuned language model based on Qwen/Qwen3-4B-Instruct-2507. It has been fine-tuned with Direct Preference Optimization (DPO) to strengthen its Chain-of-Thought (CoT) reasoning and improve the quality of its structured responses. The model is optimized for tasks that require logical deduction and well-structured output, making it suitable for applications where coherent, well-reasoned answers are crucial.
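
Below is a minimal usage sketch with Hugging Face Transformers. It assumes the model inherits the standard Qwen3 chat template from its base, Qwen/Qwen3-4B-Instruct-2507, and loads the weights in BF16 to match the quantization listed above; the example prompt is illustrative and not from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "taka104/qwen3-4b-dpo-qwen-cot-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed in the metadata
    device_map="auto",
)

# Ask for step-by-step reasoning to exercise the CoT-oriented DPO tuning.
messages = [
    {
        "role": "user",
        "content": "A train travels 120 km in 1.5 hours. "
                   "What is its average speed? Think step by step.",
    },
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With the listed 32k context length, longer multi-turn conversations or documents can be passed in the same way, subject to available GPU memory for a 4B model in BF16.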
