liubinemail/Qwen2.5-7B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Qwen2.5-7B-Instruct is a 7.61 billion parameter instruction-tuned causal language model developed by Qwen, part of the Qwen2.5 series. It brings significant improvements in coding, mathematics, instruction following, and long text generation (up to 8K tokens), with a full model context length of 131,072 tokens (served here with a 32K context window). The model also improves structured data understanding, JSON output generation, and multilingual support covering over 29 languages, making it suitable for diverse conversational AI and complex task execution.
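A minimal sketch of querying a deployment of this model through an OpenAI-compatible chat-completions endpoint; the endpoint URL is a placeholder assumption, not part of this listing, so substitute the host actually serving the model:

```python
import json

# Placeholder endpoint; replace with the host serving
# liubinemail/Qwen2.5-7B-Instruct (assumption, not from this listing).
ENDPOINT = "http://localhost:8000/v1/chat/completions"

# OpenAI-compatible chat payload; the JSON-output prompt exercises the
# structured-output capability described above.
payload = {
    "model": "liubinemail/Qwen2.5-7B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three prime numbers as a JSON array."},
    ],
    "max_tokens": 512,  # well under the model's 8K-token generation limit
    "temperature": 0.7,
}

body = json.dumps(payload)

# To send the request against a running server:
#   import urllib.request
#   req = urllib.request.Request(
#       ENDPOINT, data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Keeping the request in the standard chat-completions shape means any OpenAI-compatible client or serving stack can be swapped in without changing the payload.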
