xwen-team/Xwen-72B-Chat
Text Generation
- Concurrency Cost: 4
- Model Size: 72.7B
- Quantization: FP8
- Context Length: 32k
- Published: Jan 31, 2025
- License: apache-2.0
- Architecture: Transformer
- Availability: Open Weights
Xwen-72B-Chat is a 72-billion-parameter open-source large language model developed by xwen-team, post-trained from Qwen2.5 models. It demonstrates top-tier chat performance among open-source models under 100B parameters, excelling on benchmarks such as Arena-Hard-Auto, AlignBench, and MT-Bench. The model is optimized for general-purpose conversational AI, offering strong performance across a variety of chat-based tasks.
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model cover the following sampler settings:
- temperature
- top_p
- top_k: –
- frequency_penalty: –
- presence_penalty: –
- repetition_penalty
- min_p: –
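As an illustration, sampler settings like these are typically passed in the request body of an OpenAI-compatible chat completions call. The sketch below is an assumption, not taken from this page: the `build_payload` helper is hypothetical, the sampler values are placeholders rather than the actual user configs, and only the model name comes from the listing above. It shows one common convention of omitting unset parameters so the server applies its own defaults:

```python
# Hypothetical helper: assemble a chat-completions request body using
# the sampler parameters listed above. Parameters left as None are
# dropped so the serving backend falls back to its defaults.

def build_payload(prompt, **samplers):
    """Return a request body, skipping any sampler set to None."""
    payload = {
        "model": "xwen-team/Xwen-72B-Chat",  # model name from this page
        "messages": [{"role": "user", "content": prompt}],
    }
    # Keep only the sampler keys the caller actually set.
    payload.update({k: v for k, v in samplers.items() if v is not None})
    return payload

payload = build_payload(
    "Hello!",
    temperature=0.7,          # illustrative values only, not the
    top_p=0.9,                # saved configs from the tabs above
    top_k=None,               # unset -> omitted from the payload
    repetition_penalty=1.05,
)
```

Sending this body to a chat completions endpoint (and which of these sampler keys the backend honors) depends on the serving stack; parameters such as `min_p` and `repetition_penalty` are supported by some open-source servers but are not part of every API.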