RedHatAI/Qwen2.5-7B-Instruct
Task: Text generation
Model size: 7.6B parameters
Quantization: FP8
Context length: 32k
Concurrency cost: 1
Published: May 9, 2025
License: apache-2.0
Architecture: Transformer
Weights: Open

RedHatAI/Qwen2.5-7B-Instruct is a 7.61 billion parameter instruction-tuned causal language model from the Qwen2.5 series, developed by the Qwen team. It uses a transformer architecture with RoPE, SwiGLU, and RMSNorm, and supports a full context length of 131,072 tokens. The model significantly improves capabilities in coding, mathematics, instruction following, long text generation, and structured output (especially JSON), and offers multilingual support for over 29 languages.
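As an instruction-tuned chat model, Qwen2.5 expects conversations rendered in the ChatML format (`<|im_start|>` / `<|im_end|>` delimiters). In practice `tokenizer.apply_chat_template` from the Transformers library handles this; the helper below is an illustrative sketch of what that rendering looks like, not the library's implementation.

```python
# Illustrative sketch of the ChatML prompt format used by Qwen2.5 chat
# models. Normally tokenizer.apply_chat_template produces this text;
# build_chatml_prompt here is a hypothetical helper for demonstration.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into ChatML text,
    ending with an open assistant turn for the model to complete."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant header open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Return the answer as JSON."},
])
print(prompt)
```

When serving the model (e.g. behind an OpenAI-compatible endpoint), you would send the `messages` list directly and let the server apply the template.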
