YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3
Text Generation | Concurrency Cost: 1 | Model Size: 14.8B | Quant: FP8 | Ctx Length: 32k | Published: Feb 23, 2025 | License: apache-2.0 | Architecture: Transformer

ZYH-LLM-Qwen2.5-14B-V3 is a 14.8-billion-parameter language model developed by YOYO-AI, built on the Qwen2.5 architecture with a 131,072-token context length. The third generation of the ZYH-LLM series, it uses extensive model merging to create a powerful, unified base. It is optimized for instruction following and complex reasoning, and achieved the highest IFEval score among 14B models as of February 25, 2025, making it well suited to tasks that require precise adherence to instructions and advanced problem-solving.
