YOYO-AI/ZYH-LLM-Qwen2.5-14B-V4
Task: Text generation
Model size: 14.8B parameters
Quantization: FP8
Context length: 32k
Published: Mar 12, 2025
License: apache-2.0
Architecture: Transformer (open weights)

YOYO-AI/ZYH-LLM-Qwen2.5-14B-V4 is a 14.8 billion parameter language model developed by YOYO-AI, built on the Qwen2.5 architecture with a 32,768-token context length. It is a merge of multiple instruction-tuned and reasoning fine-tuned models, designed to improve calculation accuracy and reasoning ability while preserving strong instruction following and general capabilities. The merge incorporates a significant proportion of an R1-distilled model to bias it toward reasoning tasks.
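As a minimal sketch of how such a model is typically queried, the snippet below loads the repo with the Hugging Face `transformers` library and sends it a chat-formatted prompt. The helper names (`build_messages`, `generate`) and the system prompt are illustrative assumptions, not part of the model card; actually running `generate` downloads roughly 15 GB of weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "YOYO-AI/ZYH-LLM-Qwen2.5-14B-V4"

def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format Qwen2.5-family models expect."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},  # assumed system prompt
        {"role": "user", "content": prompt},
    ]

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (heavy: fetches the full weights)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Render the messages with the model's own chat template before tokenizing.
    text = tokenizer.apply_chat_template(
        build_messages(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(generate("What is 17 * 24?"))
```

The FP8 quantization noted above reduces memory use at serving time; with `device_map="auto"`, `transformers` will shard the weights across available GPUs and CPU as needed.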
