Hi-Satoh/adv_MoE_ALF_sft3_merged
Text Generation | Concurrency Cost: 1 | Model Size: 4B | Quant: BF16 | Ctx Length: 32k | Published: Feb 24, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights
Hi-Satoh/adv_MoE_ALF_sft3_merged is a 4-billion-parameter language model fine-tuned from Qwen/Qwen3-4B-Instruct-2507. Trained with Direct Preference Optimization (DPO) via Unsloth, the model is optimized to strengthen reasoning, particularly chain-of-thought, and to improve the quality of structured responses. It is intended for applications that require outputs aligned with preferred response patterns.
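A minimal usage sketch, assuming the merged weights load with standard Hugging Face transformers and inherit the Qwen3 chat template from the base model; the prompt text is illustrative only.

```python
# Sketch: load the merged model in BF16 and run a single chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hi-Satoh/adv_MoE_ALF_sft3_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the published BF16 weights
    device_map="auto",
)

# Example prompt (hypothetical); the model is tuned for step-by-step reasoning.
messages = [
    {"role": "user", "content": "Explain step by step why the sky appears blue."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```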