unsloth/QwQ-32B
Text generation · Concurrency cost: 2 · Model size: 32B · Quant: FP8 · Ctx length: 32K · Published: Mar 5, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

QwQ-32B is a 32.5-billion-parameter causal language model from the Qwen series, designed for enhanced reasoning. It uses a transformer architecture with RoPE, SwiGLU, RMSNorm, and attention QKV bias, and supports a full context length of 131,072 tokens via YaRN scaling for long inputs. The model is optimized for complex problem-solving and achieves competitive performance against other state-of-the-art reasoning models.
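The 131,072-token figure above comes from extending the model's native window with YaRN rope scaling. A minimal sketch of that arithmetic, assuming the scaling pattern Qwen documents for its models (the exact `rope_scaling` keys and the 4.0 factor are assumptions based on that published pattern, not taken from this page):

```python
# Hypothetical rope_scaling fragment in the style of Qwen's documented
# YaRN configuration; verify against the model's actual config.json.
rope_scaling = {
    "type": "yarn",
    "factor": 4.0,  # scale the native window by 4x
    "original_max_position_embeddings": 32768,  # native 32K window
}

# Effective context = native window * scaling factor
full_context = int(
    rope_scaling["factor"] * rope_scaling["original_max_position_embeddings"]
)
print(full_context)  # 131072, matching the full context length stated above
```

This also explains the apparent mismatch between the 32K context in the listing metadata and the 131,072-token figure in the description: the former is the native window, the latter the YaRN-extended one.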


Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model cover the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
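These sampler parameters are typically passed per request. A hedged sketch of a chat-completion payload carrying them, assuming an OpenAI-compatible endpoint (the model id comes from this page; the example values, and whether the endpoint accepts every field such as `repetition_penalty` and `min_p`, are assumptions to check against the provider's API docs):

```python
# Sketch of a request body with the sampler parameters listed above.
# The numeric values are illustrative placeholders, not the actual
# top configurations, which are not shown on this page.
payload = {
    "model": "unsloth/QwQ-32B",
    "messages": [{"role": "user", "content": "Explain RoPE in one paragraph."}],
    "temperature": 0.6,        # randomness of sampling
    "top_p": 0.95,             # nucleus sampling cutoff
    "top_k": 40,               # restrict to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by occurrence count
    "presence_penalty": 0.0,   # penalize tokens that already appeared
    "repetition_penalty": 1.05,  # multiplicative repeat discouragement
    "min_p": 0.0,              # minimum probability floor relative to the top token
}
```

POSTing this JSON to the provider's chat-completions route would apply the chosen sampler configuration to a single generation.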