Qwen/Qwen2-72B-Instruct

Parameters: 72.7B · Precision: FP8 · Context length: 131,072 tokens · License: other
Source: Hugging Face

Overview

Qwen2-72B-Instruct is a 72.7-billion-parameter instruction-tuned model from the Qwen2 series, developed by the Qwen team. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, and grouped query attention, and uses an improved tokenizer that is adaptive to multiple natural languages and code. The model was post-trained with both supervised fine-tuning and direct preference optimization.
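
Below is a minimal sketch of running a single chat turn with Hugging Face Transformers. It assumes a standard chat-template workflow and enough GPU memory for a 72B checkpoint; the prompt and generation settings are illustrative rather than taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-72B-Instruct"

# Load the tokenizer and model; device_map="auto" shards weights across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Build a chat prompt using the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize grouped query attention in two sentences."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens so only the newly generated reply is decoded.
reply = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(reply)
```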

Key Capabilities & Performance

Qwen2-72B-Instruct performs competitively against state-of-the-art open-source and proprietary models across a range of benchmarks. Notable results include:

  • MMLU: 82.3
  • HumanEval (Coding): 86.0
  • MATH (Mathematics): 59.7
  • C-Eval (Chinese): 83.8

Extended Context Length

This model supports a context length of up to 131,072 tokens. For processing long texts, it relies on YaRN, a RoPE-scaling technique for length extrapolation. Users can enable this long-context capability by adding a rope_scaling entry to the model's config.json, particularly when deploying with vLLM. Note that vLLM currently implements YaRN statically, applying the same scaling factor regardless of input length, which may degrade performance on shorter texts when the setting is enabled.
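
A minimal sketch of adding that rope_scaling entry follows, written as a small Python script that patches a locally downloaded config.json. The checkpoint path is hypothetical, and the YaRN field values (type "yarn", factor 4.0, original_max_position_embeddings 32768) are assumed from the common Transformers/vLLM convention and should be verified against the official model card before use.

```python
import json

# Hypothetical path to a locally downloaded copy of the checkpoint.
config_path = "/path/to/Qwen2-72B-Instruct/config.json"

with open(config_path) as f:
    config = json.load(f)

# YaRN length extrapolation: scale the native context window up to ~131K tokens.
# Field names and values are assumptions; check the official Qwen2 documentation.
config["rope_scaling"] = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

With the patched config.json in place, serving the model from that local directory lets vLLM pick up the rope_scaling settings when it loads the model configuration, with the shorter-text caveat noted above.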

Good for

  • General language understanding and generation tasks.
  • Multilingual applications, especially with strong Chinese language support.
  • Coding tasks, demonstrating high scores on HumanEval and MultiPL-E.
  • Complex reasoning and mathematical problem-solving.
  • Applications requiring processing of very long input texts (up to 131K tokens).