jdad334/Qwen2-7B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Qwen2-7B-Instruct is a 7.6-billion-parameter instruction-tuned causal language model from the Qwen2 series, developed by the Qwen team. It is built on a Transformer architecture with SwiGLU activation and group query attention, with a native context window of 32,768 tokens that can be extended to 131,072 tokens via YaRN. The model performs strongly across language understanding, generation, multilingual tasks, coding, mathematics, and reasoning benchmarks, making it suitable for a wide range of general-purpose applications.
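Extending the context window beyond the native 32k is typically done through YaRN rope scaling. A minimal sketch of the kind of `rope_scaling` entry added to the model's `config.json`, assuming a scaling factor of 4.0 to stretch the native 32,768-token window to roughly 131,072 tokens:

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Note that static rope scaling like this applies at all sequence lengths, so it can slightly degrade quality on short inputs; it is usually enabled only when long-context processing is actually required.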
