YiXin-AILab/YiXin-Distill-Qwen-72B
Text generation · Model size: 72.7B · Quantization: FP8 · Context length: 32k · Published: Mar 13, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

YiXin-Distill-Qwen-72B is a 72.7-billion-parameter distilled language model developed by YiXin-AILab and based on the Qwen2.5-72B architecture. Further optimized with reinforcement learning, it excels at mathematical reasoning and general knowledge tasks, showing measurable gains over comparably sized distilled models. It is intended for high-performance applications that demand strong analytical and problem-solving capabilities.
