Qwen/Qwen2.5-0.5B-Instruct
Available on Hugging Face

Text generation · Concurrency cost: 1 · Model size: 0.5B · Quantization: BF16 · Context length: 32k · Published: Sep 16, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights · Status: Warm

Qwen2.5-0.5B-Instruct is a 0.49 billion parameter instruction-tuned causal language model developed by Qwen, featuring a 32,768 token context length. Compared with its predecessor, it offers broader knowledge (notably in coding and mathematics), stronger instruction following, longer text generation, and better understanding and generation of structured data such as JSON. It supports over 29 languages, making it suitable for multilingual use cases that require robust instruction adherence and structured output from a compact model.
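The snippet below is a minimal sketch of running the model locally with Hugging Face transformers using its built-in chat template; the prompt content and generation settings are illustrative, and torch plus accelerate are assumed to be installed.

```python
# Minimal sketch: local inference with Hugging Face transformers.
# Prompt content and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a one-line Python function that reverses a string."},
]

# Build the chat prompt with the model's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion and decode only the newly generated tokens.
output_ids = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```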


Qwen2.5-0.5B-Instruct Overview

Qwen2.5-0.5B-Instruct is a compact, instruction-tuned causal language model from the Qwen2.5 series, developed by Qwen. With 0.49 billion parameters and a 32,768 token context length, it represents a significant advancement over Qwen2, offering enhanced capabilities despite its compact size.

Key Capabilities

  • Expanded Knowledge & Skills: Significantly improved performance in coding and mathematics due to specialized expert model integration.
  • Enhanced Instruction Following: More robust instruction adherence and better handling of diverse system prompts, improving role-play and chatbot condition-setting.
  • Advanced Text Generation: Improved ability to generate long texts (over 8K tokens) and understand structured data like tables, with a focus on generating structured outputs, especially JSON (see the sketch after this list).
  • Multilingual Support: Supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
  • Long-Context Processing: The Qwen2.5 series supports contexts of up to 128K tokens; this 0.5B variant works with a 32,768 token context and can generate up to 8K tokens.
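As a concrete illustration of the structured-output capability noted above, here is a minimal sketch of asking the model for a JSON object and validating the reply. The schema, prompt wording, and greedy decoding are illustrative assumptions; a small model may still need retries or constrained decoding for fully reliable JSON.

```python
# Minimal sketch: eliciting and validating JSON output.
# Schema and prompt wording are illustrative assumptions.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "Respond only with a JSON object containing the keys "
                                  "'city' (string) and 'population' (integer). No prose."},
    {"role": "user", "content": "Give me the largest city in Japan."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
text = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)

try:
    data = json.loads(text)  # parse the model's JSON reply
    print(data.get("city"), data.get("population"))
except json.JSONDecodeError:
    print("Model did not return valid JSON:", text)
```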

Good for

  • Applications requiring a compact model with strong coding and mathematical reasoning.
  • Chatbots and assistants needing resilient instruction following and role-play capabilities.
  • Tasks involving long text generation and the creation of structured outputs (e.g., JSON).
  • Multilingual applications targeting a broad range of global languages.
Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each configuration specifies values for the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
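
As a rough sketch of how these settings might be passed in practice, the snippet below sends them through an OpenAI-compatible chat completions request. The base URL, the extra_body pass-through for the non-standard fields (top_k, repetition_penalty, min_p), and all parameter values are assumptions for illustration, not verified Featherless specifics; check the provider's documentation for the fields it actually accepts.

```python
# Minimal sketch: passing the sampler settings above via an OpenAI-compatible API.
# Base URL, parameter values, and extra_body field support are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    messages=[{"role": "user", "content": "Summarize what a chat template is."}],
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard sampler fields passed through the request body; support is
    # provider-dependent (assumption).
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```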