Qwen/Qwen2.5-32B-Instruct
Source: Hugging Face

Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Context Length: 32K · Published: Sep 17, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen/Qwen2.5-32B-Instruct is a 32.5-billion-parameter instruction-tuned causal language model developed by Qwen, built on a transformer architecture with RoPE, SwiGLU, and RMSNorm. It significantly improves upon Qwen2 with enhanced coding and mathematics capabilities, better instruction following, and robust long-text generation of up to 8K tokens within a 128K context window. It excels at understanding structured data such as JSON and supports over 29 languages, making it suitable for a wide range of complex NLP tasks.


Qwen2.5-32B-Instruct Overview

Qwen2.5-32B-Instruct is an instruction-tuned causal language model from the Qwen2.5 series, developed by Qwen. This 32.5 billion parameter model builds upon its predecessor, Qwen2, with substantial enhancements across several key areas.

Key Capabilities and Improvements

  • Enhanced Domain Expertise: Significantly improved performance in coding and mathematics due to specialized expert model integration.
  • Instruction Following: Demonstrates marked improvements in adhering to instructions and handling diverse system prompts, which benefits role-play and conditional chatbot implementations.
  • Long-Context Support: Features a full context length of 131,072 tokens and can generate outputs up to 8,192 tokens. It uses YaRN for long-context extrapolation, though vLLM applies YaRN statically, which may impact performance on shorter texts (see the configuration sketch after this list).
  • Structured Data Handling: Excels at understanding and generating structured data, particularly JSON outputs.
  • Multilingual Support: Provides robust support for over 29 languages, including major global languages like Chinese, English, French, Spanish, German, and Japanese.
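
For contexts beyond 32,768 tokens, the upstream Qwen2.5 model card enables YaRN by adding a `rope_scaling` block to the model's `config.json`. A minimal sketch of that change, with the scaling factor and original length taken from the upstream card:

```json
{
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```

Because vLLM applies this scaling statically to all inputs, the upstream card advises adding it only when long-context processing is actually required.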

Architecture and Features

This model employs a transformer architecture incorporating RoPE, SwiGLU, RMSNorm, and attention QKV bias, and it underwent both pretraining and post-training stages. For detailed evaluation results and performance benchmarks, refer to the official Qwen2.5 blog. Deployment with vLLM is recommended for optimal performance, especially when processing long contexts.
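
For quick local use, the upstream model card's Transformers quickstart applies; the sketch below condenses it (the prompt text is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-32B-Instruct"

# Load the instruct model and its tokenizer (weights download from Hugging Face).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # spread layers across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short introduction to large language models."},
]

# Render the conversation with Qwen's chat template, then generate.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the new completion is decoded.
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```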

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model each set the following sampler parameters:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
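
Since the per-configuration values are not reproduced above, the sketch below only shows how one such combination could be sent to an OpenAI-compatible endpoint. Every sampler value is a placeholder, and the base URL is an assumption, not a documented Featherless setting:

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; replace with your provider's base URL and key.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",
    messages=[{"role": "user", "content": "Explain RoPE in two sentences."}],
    temperature=0.7,           # placeholder values throughout, not the actual top-3 configs
    top_p=0.8,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Samplers outside the core OpenAI schema are passed through extra_body.
    extra_body={"top_k": 20, "repetition_penalty": 1.05, "min_p": 0.0},
)
print(response.choices[0].message.content)
```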