Hi-Satoh/adv_sft_dpo_final_6_merged
Task: Text generation
Concurrency cost: 1
Model size: 4B
Quantization: BF16
Context length: 32k
Published: Feb 28, 2026
License: apache-2.0
Architecture: Transformer (open weights)

Hi-Satoh/adv_sft_dpo_final_6_merged is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO). The model targets improved reasoning (chain-of-thought) and structured response quality, aligning its outputs with the preferred responses in its preference-training dataset.
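A minimal usage sketch follows, assuming the model keeps the standard Hugging Face `transformers` causal-LM and chat-template interface inherited from its Qwen/Qwen3-4B-Instruct-2507 base; the prompt and generation parameters are illustrative, not taken from the model's documentation:

```python
# Minimal usage sketch (assumes standard transformers causal-LM loading;
# generation settings are illustrative, not from the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hi-Satoh/adv_sft_dpo_final_6_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain chain-of-thought prompting in one paragraph."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the page lists a 32k context length, long prompts should fit without truncation, but verifying `tokenizer.model_max_length` after loading is a reasonable sanity check.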
