Qwen/Qwen2-0.5B-Instruct

Parameters: 0.5B
Precision: BF16
Context length: 32,768 tokens
License: apache-2.0

Overview

Qwen2-0.5B-Instruct is a 0.5-billion-parameter instruction-tuned model from the Qwen2 series, developed by the Qwen team. It is part of a new generation of Qwen LLMs built on a Transformer architecture with SwiGLU activation, attention QKV bias, and grouped-query attention (GQA), together with an improved tokenizer that adapts to multiple natural languages and code.
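
As a minimal loading sketch (assuming the Hugging Face transformers library; Qwen2 support requires transformers >= 4.37.0):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the instruction-tuned checkpoint in BF16, matching the published precision.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
```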

Key Capabilities & Performance

Qwen2-0.5B-Instruct shows significant improvements over its predecessor, Qwen1.5-0.5B-Chat, across standard benchmarks. It was pretrained on a large-scale dataset and then post-trained with supervised fine-tuning and direct preference optimization (DPO). Notable gains include:

  • MMLU: 37.9 (vs. 35.0 for Qwen1.5-0.5B-Chat)
  • HumanEval: 17.1 (vs. 9.1 for Qwen1.5-0.5B-Chat)
  • GSM8K: 40.1 (vs. 11.3 for Qwen1.5-0.5B-Chat)
  • C-Eval: 45.2 (vs. 37.2 for Qwen1.5-0.5B-Chat)

Use Cases

This model is suitable for a wide range of instruction-following tasks, leveraging its enhanced capabilities in:

  • Language understanding and generation
  • Multilingual applications
  • Coding assistance
  • Mathematical problem-solving
  • General reasoning tasks

Its compact size (0.5B parameters) combined with a 32K-token context window makes it well suited to deployments that require a capable yet lightweight model.
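
As an illustrative inference sketch, continuing the loading example above (the prompt and the max_new_tokens value are arbitrary choices, not part of the model card):

```python
# Build a chat-formatted prompt using the tokenizer's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short Python function that reverses a string."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response; max_new_tokens is an arbitrary illustrative value.
generated = model.generate(**model_inputs, max_new_tokens=256)

# Strip the prompt tokens so only the newly generated text is decoded.
output_ids = generated[0][model_inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```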