Haitao999/Qwen2.5-7B-Base-EMPO-natural_reasoning_all_level
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Apr 22, 2025 · Architecture: Transformer

Haitao999/Qwen2.5-7B-Base-EMPO-natural_reasoning_all_level is a 7.6-billion-parameter language model fine-tuned from Qwen/Qwen2.5-7B. It was trained on the qingyangzhang/natural_reasoning_all_level dataset with the GRPO method and specializes in natural reasoning tasks, targeting complex reasoning and problem-solving with a maximum context length of 131,072 tokens.
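A minimal usage sketch with Hugging Face transformers, assuming the model follows the standard Qwen2.5 causal-LM interface. Since it is fine-tuned from a base (non-instruct) checkpoint, a plain completion-style prompt is used rather than a chat template; the `build_prompt` format here is an illustrative assumption, not documented behavior of this checkpoint.

```python
MODEL_ID = "Haitao999/Qwen2.5-7B-Base-EMPO-natural_reasoning_all_level"

def build_prompt(question: str) -> str:
    # Completion-style prompt; no chat template is assumed for a
    # base-model fine-tune. Adjust to taste for your task.
    return f"Question: {question}\nAnswer:"

def generate(question: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so the prompt helper works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("What is the derivative of x^2?"))
```

Loading a 7.6B FP8 checkpoint requires a GPU with sufficient memory; `device_map="auto"` lets transformers place weights across available devices.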
