Qwen/QwQ-32B-Preview
Text Generation · Open Weights · Warm

Concurrency Cost: 2
Model Size: 32.8B
Quant: FP8
Context Length: 32k
Published: Nov 27, 2024
License: apache-2.0
Architecture: Transformer
QwQ-32B-Preview is an experimental 32.5-billion-parameter causal language model developed by the Qwen Team, built on a transformer architecture with RoPE, SwiGLU, and RMSNorm. The model focuses on advancing AI reasoning and is particularly strong at mathematical and coding tasks. It supports a context length of 32,768 tokens, making it suitable for complex analytical problems.
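As a sketch of basic local usage, the snippet below loads the checkpoint with Hugging Face transformers and runs a short chat-templated generation. The repo id comes from the listing above; the dtype and device-mapping choices are assumptions that depend on available hardware.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and
# enough GPU memory for a 32B model (dtype/device_map choices are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # spread layers across available GPUs
)

# QwQ is a chat model, so inputs go through the chat template.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```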
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model. Each config sets some subset of the sampler parameters temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
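As a sketch of how such sampler settings are applied in practice, the request below goes through Featherless's OpenAI-compatible API. The base URL and every parameter value here are illustrative assumptions, not the user-measured configs from the widget; top_k, repetition_penalty, and min_p are not part of the OpenAI schema, so they are passed as extra fields, which is a common convention among open-weights providers but may vary by deployment.

```python
# A minimal sketch, assuming an OpenAI-compatible endpoint at
# https://api.featherless.ai/v1 and illustrative sampler values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="Qwen/QwQ-32B-Preview",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    # Standard OpenAI sampler fields:
    temperature=0.7,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-OpenAI samplers sent as extra JSON fields (an assumption; support
    # for these names depends on the provider):
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```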