arata1/dpo-qwen-cot-merged-0211-b05
Task: Text generation
Concurrency cost: 1
Model size: 4B
Quantization: BF16
Context length: 32k
Published: Feb 16, 2026
License: apache-2.0
Architecture: Transformer (open weights)

arata1/dpo-qwen-cot-merged-0211-b05 is a 4-billion-parameter Qwen3 model developed by arata1 and fine-tuned with Unsloth and Hugging Face's TRL library, with Unsloth used to speed up training (advertised as roughly 2x faster). It is designed for general instruction-following tasks and builds on the Qwen3 architecture with a 32,768-token context length.
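Below is a minimal usage sketch for loading the model with the standard Hugging Face `transformers` text-generation workflow. The prompt, generation settings, and chat-template call are illustrative assumptions, not taken from the model card.

```python
# Minimal sketch: load the model and run a single chat-style generation.
# Assumes the standard `transformers` API; settings here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arata1/dpo-qwen-cot-merged-0211-b05"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Hypothetical prompt; Qwen3-based models ship a chat template.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The full 32k context length is available for long prompts, subject to available memory on the serving hardware.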
