CalamitousFelicitousness/Qwen2.5-32B-Instruct-fp8-dynamic
Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32K · Published: Sep 18, 2024 · License: apache-2.0 · Architecture: Transformer

Qwen2.5-32B-Instruct is a 32.5-billion-parameter instruction-tuned causal language model developed by the Qwen team, using the Qwen2 transformer architecture. It brings significant improvements in coding, mathematics, instruction following, and long-text generation (up to 8K output tokens), with a full context length of 131,072 tokens. The model is designed for robust performance across diverse tasks, including structured-data understanding, and offers multilingual support for over 29 languages. This repository hosts an FP8 dynamically quantized variant of the model.
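Like other Qwen2.5-Instruct models, this model expects conversations in the ChatML prompt format. In practice you would use `tokenizer.apply_chat_template` from the `transformers` library, but as a rough sketch of what that template produces (the special tokens shown are the standard Qwen ChatML markers):

```python
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML-style prompt,
    ending with an open assistant turn so the model continues from there."""
    parts = []
    for m in messages:
        # Each turn is delimited by <|im_start|> / <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Generation prompt: the model's reply is produced after this header.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain FP8 quantization briefly."},
]
prompt = build_chatml_prompt(messages)
```

For serving the FP8 checkpoint itself, an inference engine with FP8/compressed-tensors support (such as vLLM) is typically used; the plain string construction above only illustrates the prompt layout.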
