qingy2024/Qwen2.5-Math-14B-Instruct-Preview
Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Dec 1, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

qingy2024/Qwen2.5-Math-14B-Instruct-Preview is a 14.8-billion-parameter instruction-tuned language model developed by qingy2019, fine-tuned from unsloth/qwen2.5-14b-instruct-bnb-4bit. The model is optimized for mathematical reasoning and general instruction following, and was trained with the Unsloth framework for faster fine-tuning. It demonstrates capability on complex reasoning tasks, as indicated by its performance on benchmarks such as MATH Lvl 5 and BBH.
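As an instruction-tuned Qwen2.5-family model, it expects prompts in the ChatML format. The sketch below, assuming the standard Qwen2.5 chat template (in practice `tokenizer.apply_chat_template` from the Hugging Face `transformers` library handles this automatically), shows how such a prompt is assembled before being passed to the tokenizer and model:

```python
def build_chatml_prompt(user_msg: str,
                        system_msg: str = "You are a helpful assistant.") -> str:
    """Assemble a ChatML-style prompt as used by Qwen2.5-family models.

    The resulting string ends with an open assistant turn, so the model's
    generation continues as the assistant's reply.
    """
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical math query; the prompt string would then be tokenized and
# passed to the model for generation.
prompt = build_chatml_prompt("Solve: what is 7 * 8?")
print(prompt)
```

For actual inference, the same prompt construction is typically delegated to `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`, which reads the template shipped with the model repository.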
