Tongyi-Zhiwen/QwenLong-L1-32B
Text Generation · Open Weights · Warm · 0.2K
Concurrency Cost: 2 | Model Size: 32B | Quant: FP8 | Ctx Length: 32k | Published: May 23, 2025 | License: apache-2.0 | Architecture: Transformer

QwenLong-L1-32B is a 32 billion parameter long-context large reasoning model (LRM) developed by Tongyi Lab, Alibaba Group. It is the first long-context LRM trained with reinforcement learning (RL) for enhanced long-context reasoning capabilities. The model excels on document question answering (DocQA) benchmarks, outperforming other flagship LRMs and achieving performance comparable to Claude-3.7-Sonnet-Thinking. It is optimized for robust long-context generalization across mathematical, logical, and multi-hop reasoning tasks.
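To illustrate the DocQA setup described above, a long-context prompt typically embeds the full document followed by the question. The sketch below assembles such a prompt; the instruction wording and `<text>` tags are a hypothetical template for illustration, not the model's official format.

```python
def build_docqa_prompt(document: str, question: str) -> str:
    """Embed a long document and a question into a single prompt.

    The surrounding instruction and <text> tags are a hypothetical
    template; consult the model card for the official prompt format.
    """
    return (
        "Please read the following text and answer the question below.\n\n"
        f"<text>\n{document}\n</text>\n\n"
        f"{question}"
    )


prompt = build_docqa_prompt(
    "Quarterly revenue rose 12% year over year ...",
    "By how much did quarterly revenue rise?",
)
```

With a 32k context window, `document` can be tens of thousands of tokens long; the question is placed after the document so it stays close to where generation begins.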


Popular Sampler Settings

The most popular sampler configurations among Featherless users for this model tune the following parameters:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
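The parameters above map directly onto a chat-completion request. The sketch below assembles such a payload for an OpenAI-compatible endpoint (Featherless exposes one); the endpoint URL, the sampler values, and the assumption that the non-standard parameters (`top_k`, `repetition_penalty`, `min_p`) are accepted at the top level are all illustrative, not a recommended configuration.

```python
def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload with explicit sampler settings.

    The values below are placeholders for illustration only; use the
    community configs from the model page for real workloads.
    """
    return {
        "model": "Tongyi-Zhiwen/QwenLong-L1-32B",
        "messages": [{"role": "user", "content": prompt}],
        # Standard OpenAI-style sampling parameters:
        "temperature": 0.7,
        "top_p": 0.95,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        # Extensions many open-model servers accept (not part of the
        # OpenAI spec; whether they go here or in an extra_body field
        # depends on the client library):
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    }


payload = build_request("Summarize the attached report in three bullet points.")
# To send it, POST the JSON payload to the (assumed) endpoint
# https://api.featherless.ai/v1/chat/completions with an
# Authorization: Bearer <API_KEY> header.
```

`min_p` and `repetition_penalty` are the parameters most likely to need server-specific handling, since they come from open-source inference stacks rather than the OpenAI API.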