YOYO-AI/ZYH-LLM-Qwen2.5-14B

Parameters: 14.8B
Tensor type: FP8
Context length: 131,072 tokens
License: apache-2.0
Overview

ZYH-LLM-Qwen2.5-14B: An Upgraded Merged Model

ZYH-LLM-Qwen2.5-14B is a 14.8-billion-parameter language model developed by YOYO-AI and released on February 5, 2025. The model opens a new series from YOYO-AI, built on the Qwen2.5 architecture and designed to outperform the team's earlier merged models.

Key Capabilities & Development

  • Advanced Merging Techniques: The model was created with the 'della' and 'sce' merging methods, a layered approach to combining the strengths of several source models.
  • Comprehensive Base Models: It integrates several Qwen2.5-14B variants, including:
    • Qwen2.5-Coder-14B
    • Qwen2.5-Coder-14B-instruct
    • Qwen2.5-14B-instruct
    • Qwen2.5-14B-instruct-1M
    • Qwen2.5-14B
    This combination suggests a broad capability set, potentially excelling in both general instruction following and specialized coding tasks.
  • High Performance Focus: YOYO-AI emphasizes that this model's performance is "absolutely phenomenal," surpassing their previously released merged models.
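The real 'della' and 'sce' methods are more involved than can be shown here (della uses magnitude-aware sampling of delta parameters; sce adds variance-based filtering and sign election), and the actual merge was presumably done with tooling such as mergekit rather than hand-written code. As a minimal sketch of the core drop-and-rescale idea behind della-style merging, using toy NumPy arrays in place of real model weights:

```python
import numpy as np

def della_style_merge(base, finetuned_models, drop_p=0.5, seed=0):
    """Simplified sketch of a della-style merge: take each fine-tuned
    model's delta from the base, randomly drop a fraction of the delta
    entries, rescale the survivors to preserve expected magnitude, and
    add the averaged result back onto the base weights."""
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for ft in finetuned_models:
        delta = ft - base                          # task-specific change
        keep = rng.random(delta.shape) >= drop_p   # drop with probability drop_p
        delta = np.where(keep, delta, 0.0) / (1.0 - drop_p)  # rescale survivors
        merged_delta += delta
    return base + merged_delta / len(finetuned_models)

# Toy example: two "fine-tunes" that shifted every weight by +1 and -1.
base = np.zeros(8)
ft_a = base + 1.0
ft_b = base - 1.0
merged = della_style_merge(base, [ft_a, ft_b])
print(merged.shape)  # (8,)
```

With drop_p=0.0 nothing is dropped and the result reduces to a plain average of the fine-tuned weights; the random dropping is what lets conflicting deltas (like the +1/-1 pair above) interfere less destructively in a real merge.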

Future Availability

A GGUF-format version of ZYH-LLM-Qwen2.5-14B is anticipated soon, which would make the model easier to run on consumer hardware through llama.cpp-based runtimes.