Qwen/QwQ-32B
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 5, 2025 · License: apache-2.0 · Architecture: Transformer · 2.9K · Open Weights · Warm

Qwen/QwQ-32B is a 32.5 billion parameter causal language model developed by Qwen, designed specifically for enhanced reasoning capabilities. This model utilizes a transformer architecture with RoPE, SwiGLU, and RMSNorm, and supports an extensive context length of 131,072 tokens. It achieves competitive performance against state-of-the-art reasoning models like DeepSeek-R1 and o1-mini, making it suitable for complex problem-solving and tasks requiring deep logical inference.
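Since the model is served for text generation, a request can be sketched against an OpenAI-compatible chat-completions schema. This is a minimal illustration, not an official client: the helper name, the system-free message layout, and the example prompt are assumptions; only the model identifier `Qwen/QwQ-32B` comes from this card.

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions request body for Qwen/QwQ-32B.

    Field names follow the widely used OpenAI-compatible schema;
    this helper is a hypothetical sketch, not part of the model card.
    """
    return {
        "model": "Qwen/QwQ-32B",
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
    }

# Build (but do not send) an example request body.
body = build_chat_request("Explain why the sky is blue, step by step.")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to whichever OpenAI-compatible endpoint hosts the model; the endpoint URL and authentication are deployment-specific and are omitted here.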


Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model draw on the following samplers (the specific values are shown per-configuration on the model page):

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
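The samplers above can be bundled into a single request fragment. The sketch below is illustrative only: the default values are placeholders chosen for demonstration, not the actual top configurations used on Featherless, which are not reproduced in this card.

```python
def sampler_settings(**overrides) -> dict:
    """Combine the samplers listed above into one request fragment.

    All default values are illustrative placeholders, NOT the real
    user-popular configurations for this model.
    """
    settings = {
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.0,
        "min_p": 0.0,
    }
    settings.update(overrides)  # caller-supplied values win
    return settings

# Example: start from the placeholder defaults, raise temperature.
config = sampler_settings(temperature=0.8)
```

These keys would typically be merged into the body of a chat-completions request alongside the model name and messages.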