YOYO-AI/Qwen2.5-14B-YOYO-V4-p2
Text Generation
Concurrency Cost: 1
Model Size: 14.8B
Quant: FP8
Ctx Length: 32k
Published: Mar 1, 2025
License: apache-2.0
Architecture: Transformer
Open Weights

YOYO-AI/Qwen2.5-14B-YOYO-V4-p2 is a 14.8 billion parameter preview model from the fourth generation of the Qwen-YOYO series, developed by YOYO-AI. With a 32,768-token context length, it belongs to a series of variants exploring distinct model-merging methodologies, released to identify the best-performing candidate. It is designed as a precursor to a larger official release that will support a 1 million-token context length, with a focus on advanced language understanding and generation.
