deepseek-ai/DeepSeek-V3.2-Speciale
Text Generation · Concurrency Cost: 4 · Model Size: 685B · Quant: FP8 · Ctx Length: 32k · Published: Nov 28, 2025 · License: MIT · Architecture: Transformer · 0.7K · Open Weights · Warm
DeepSeek-V3.2-Speciale is a 685-billion-parameter language model developed by DeepSeek-AI with a 32,768-token context length. It uses DeepSeek Sparse Attention (DSA) for computational efficiency over long contexts, together with a scalable reinforcement-learning framework. This high-compute variant is optimized for deep reasoning and agentic tasks, with reported performance comparable to or surpassing models such as GPT-5 and Gemini-3.0-Pro on complex problem solving, including mathematical and informatics olympiads.
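A minimal sketch of querying this model through an OpenAI-compatible chat endpoint. The base URL and the environment variable name are assumptions (check the provider's docs for the exact values); only the model identifier comes from this page.

```python
# Hypothetical example of calling deepseek-ai/DeepSeek-V3.2-Speciale
# through an OpenAI-compatible API. Base URL and env-var name are
# assumptions, not confirmed values from this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",   # assumed endpoint
    api_key=os.environ["FEATHERLESS_API_KEY"],  # hypothetical env var
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Speciale",
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```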
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model (values were not captured here); a sketch of passing these parameters in a request follows the list.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
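A sketch of supplying these sampler settings with a request, reusing the client from the example above. The values shown are illustrative placeholders, not the (unloaded) Featherless presets listed here. Non-standard knobs such as top_k, min_p, and repetition_penalty are not part of the OpenAI client's signature, so they are passed via extra_body; whether the server honors them is provider-dependent.

```python
# Illustrative sampler settings only -- the actual popular configs for
# this model were not shown on the page. extra_body forwards fields the
# OpenAI client does not expose; server-side support is assumed.
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Speciale",
    messages=[{"role": "user", "content": "Write a haiku about sparse attention."}],
    temperature=0.7,        # illustrative value
    top_p=0.95,             # illustrative value
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={
        "top_k": 40,             # assumed server-side support
        "min_p": 0.05,           # assumed server-side support
        "repetition_penalty": 1.05,
    },
)
```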