Sigsaghze76/Qwen2.5-7B-Instruct
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Ctx length: 32K · Published: Mar 27, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

Qwen2.5-7B-Instruct is a 7.61-billion-parameter instruction-tuned causal language model from the Qwen team, built on the Qwen2.5 architecture. It brings significant improvements in coding, mathematics, instruction following, and long-form text generation (up to 8K output tokens), with a full context length of 131,072 tokens. The model is designed for robust performance across diverse tasks, including structured-data understanding and multilingual support covering more than 29 languages.
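As a minimal usage sketch, the model can be loaded with the Hugging Face `transformers` library under the repo name shown on this card (`Sigsaghze76/Qwen2.5-7B-Instruct` is assumed here to be a standard Transformers checkpoint). The `build_chat_prompt` helper below is illustrative: it renders messages in the ChatML format that Qwen2.5 chat models use, which `tokenizer.apply_chat_template` would normally handle for you.

```python
def build_chat_prompt(messages):
    """Render a list of {"role", "content"} messages in ChatML, the chat
    format used by Qwen2.5 instruct models, ending with an open assistant
    turn so the model continues from there."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


if __name__ == "__main__":
    # Heavy imports kept here so the helper above stays dependency-free.
    # Assumes `transformers` and a PyTorch backend are installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Sigsaghze76/Qwen2.5-7B-Instruct"  # repo name from this card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about autumn."},
    ]
    # In practice, prefer the tokenizer's built-in template:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=128)
    print(
        tokenizer.decode(
            output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
        )
    )
```

Note that the generation settings above (dtype, device placement, `max_new_tokens`) are defaults for illustration, not tuned recommendations.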
