jinkami07/dpo-qwen3-4b-r8-lr1e6-beta005-ep2-merged
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Feb 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Warm

jinkami07/dpo-qwen3-4b-r8-lr1e6-beta005-ep2-merged is a 4-billion-parameter instruction-tuned language model based on Qwen3, fine-tuned by jinkami07 with Direct Preference Optimization (DPO). It is optimized to improve reasoning, particularly chain-of-thought, and to raise the quality of structured responses. The model produces aligned, coherent outputs for complex prompts, making it suitable for tasks that require logical progression and structured answers.
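For readers unfamiliar with DPO, the fine-tuning mentioned above optimizes a simple contrastive objective over preference pairs rather than training a separate reward model. Below is a minimal sketch of that per-pair loss; it assumes the `beta005` fragment of the model name denotes beta = 0.05, and the function name and inputs are illustrative, not taken from the author's training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.05):
    """DPO loss for a single (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a full response
    under either the policy being trained or the frozen reference
    model. beta controls how far the policy may drift from the
    reference (0.05 here, assuming that is what 'beta005' means).
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(x)) written in the numerically stable form log(1 + e^{-x})
    return math.log1p(math.exp(-logits))

# When the policy favors the chosen response more than the reference
# does, the margin is positive and the loss drops below log(2).
print(dpo_loss(-1.0, -2.0, -1.5, -1.5))
```

At initialization the policy equals the reference, every margin is zero, and the loss starts at exactly log(2) ≈ 0.693; training pushes it lower by widening the gap between chosen and rejected responses.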
