xwen-team/Xwen-7B-Chat
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Jan 31, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

Xwen-7B-Chat is a 7.6-billion-parameter large language model developed by xwen-team, post-trained from Qwen2.5-7B. It is optimized for chat, ranking first among open-source models under 10B parameters on benchmarks such as Arena-Hard-Auto, AlignBench, and MT-Bench. With a context length of 131,072 tokens, it is well suited to conversational AI applications that require high-quality responses.
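Since the model is post-trained from Qwen2.5-7B, it presumably expects the ChatML-style prompt format used by the Qwen family. The helper below is a minimal sketch of that formatting; the exact template is an assumption based on the Qwen2.5 lineage, not stated on this card, and in practice `tokenizer.apply_chat_template` from Hugging Face `transformers` should be used instead.

```python
# Minimal sketch of ChatML-style prompt formatting for a Qwen2.5-derived
# chat model such as Xwen-7B-Chat.
# NOTE: the template details below are an assumption based on the Qwen2.5
# lineage; with transformers installed, prefer:
#   tokenizer.apply_chat_template(messages, add_generation_prompt=True)

def build_chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what Xwen-7B-Chat is."},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

The resulting string can be tokenized and passed to any runtime that serves the model's weights.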
