f0rc3ps/Qwen2-7B-Instruct
Text generation · Open weights
Concurrency cost: 1
Model size: 7.6B
Quantization: FP8
Context length: 32k
Published: Apr 5, 2026
License: apache-2.0
Architecture: Transformer

Qwen2-7B-Instruct is a 7.6-billion-parameter instruction-tuned causal language model developed by the Qwen team, based on the Transformer architecture. It features SwiGLU activation, attention QKV bias, and grouped-query attention, and supports a context length of up to 131,072 tokens with YaRN (32k natively). The model performs strongly across benchmarks for language understanding, generation, multilingual capability, coding, mathematics, and reasoning, making it suitable for diverse general-purpose applications.
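As an instruction-tuned chat model, Qwen2-7B-Instruct expects conversations in the ChatML format. The sketch below builds such a prompt by hand purely for illustration; in practice you would pass the message list to `tokenizer.apply_chat_template` from the `transformers` library, and the helper name here is hypothetical.

```python
def build_chatml_prompt(messages):
    """Render a list of {role, content} messages into the ChatML format
    used by Qwen2 chat models. Illustrative only; real code should use
    the tokenizer's chat template instead."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
])
print(prompt)
```

The resulting string is what the tokenizer ultimately encodes before generation; the trailing `<|im_start|>assistant\n` cues the model to produce the assistant's reply.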
