qingy2024/Qwen2.5-Math-14B-Instruct-Pro
Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Ctx length: 32k · Published: Dec 3, 2024 · Architecture: Transformer · Cold

qingy2024/Qwen2.5-Math-14B-Instruct-Pro is a 14.8 billion parameter instruction-tuned language model, merged using the TIES method from Qwen/Qwen2.5-14B-Instruct and qingy2019/Qwen2.5-Math-14B-Instruct-Alpha. The merge is aimed at mathematical reasoning and problem solving: it combines the general instruction following of Qwen2.5-14B-Instruct with the math specialization of the Alpha model. The underlying model family supports a context length of up to 131,072 tokens (the hosted endpoint above serves a 32k window), and it is designed for applications requiring strong mathematical instruction following across multiple languages.
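TIES merges are commonly produced with the mergekit tool, which takes a YAML recipe naming the source models and merge parameters. The actual recipe for this model is not published in this description, so the sketch below is illustrative: the two source models are taken from the text above, while the `density`, `weight`, and `normalize` values are assumed placeholders, not the author's settings.

```yaml
# Hypothetical mergekit recipe for a TIES merge of the two source models.
# density/weight values are illustrative assumptions, not the published recipe.
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 0.5   # fraction of parameters retained after trimming
      weight: 0.5    # relative contribution to the merged weights
  - model: qingy2019/Qwen2.5-Math-14B-Instruct-Alpha
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```

With mergekit installed, a recipe like this is typically run as `mergekit-yaml config.yaml ./output-model`, producing a standard Hugging Face checkpoint directory.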
