maldv/QwentileLambda2.5-32B-Instruct
Qwentile Lambda 2.5 32B Instruct by Praxis Maldevide is a 32.8 billion parameter instruction-tuned model with a 131,072 token context length. This model is a normalized denoised Fourier interpolation of several Qwen 2.5-32B based models, including a significant contribution from Nvidia's OpenCodeReasoning-Nemotron-32B. It is designed to exhibit superior reasoning and coding abilities, blending advanced thought processes with creative output for complex tasks.
Qwentile Λ 2.5 32B Instruct Overview
Qwentile Λ 2.5 32B Instruct, developed by Praxis Maldevide, is a 32.8 billion parameter instruction-tuned model built upon the Qwen 2.5-32B architecture. It stands out due to its unique creation method: a "normalized denoised Fourier interpolation" of multiple high-performing models. This process involves warping and interpolating various models in signal space and then integrating them back onto a Qwentile base, specifically incorporating the Nemotron OpenCodeReasoning input layer.
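The exact recipe behind "normalized denoised Fourier interpolation" is not spelled out here, but the general idea can be sketched in a toy form: transform two weight tensors into frequency space, suppress low-magnitude coefficients as a crude denoising step, linearly interpolate the spectra, invert the transform, and renormalize. Everything below (the 2-D FFT, the quantile threshold, the norm rescaling, and the function name `fourier_interpolate`) is an illustrative assumption, not the author's actual pipeline.

```python
import numpy as np

def fourier_interpolate(weights_a, weights_b, alpha=0.5, keep=0.9):
    """Toy sketch of Fourier-space weight blending (assumed method).

    Steps: FFT each tensor, zero the smallest-magnitude coefficients
    ("denoising"), linearly interpolate the two spectra, invert the
    FFT, and rescale the result to the norm of the first tensor.
    """
    fa = np.fft.fft2(weights_a)
    fb = np.fft.fft2(weights_b)

    def denoise(f):
        # Keep only the `keep` fraction of largest-magnitude coefficients.
        thresh = np.quantile(np.abs(f).ravel(), 1.0 - keep)
        return np.where(np.abs(f) >= thresh, f, 0.0)

    # Linear interpolation of the denoised spectra.
    blended = (1.0 - alpha) * denoise(fa) + alpha * denoise(fb)

    # Back to weight space; discard the (numerically tiny) imaginary part.
    out = np.fft.ifft2(blended).real

    # "Normalized": rescale so the merged tensor keeps the first
    # tensor's overall magnitude.
    out *= np.linalg.norm(weights_a) / (np.linalg.norm(out) + 1e-12)
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 8))
b = rng.normal(size=(8, 8))
merged = fourier_interpolate(a, b)
```

In a real merge this would be applied per weight tensor across all constituent models, with per-layer choices about which model contributes which components; this sketch only shows the two-tensor case.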
Key Capabilities
- Advanced Reasoning: The model is engineered to demonstrate superior "thinking skills" by blending the strengths of its constituent models, including those focused on advanced reasoning.
- Code Ability: With the integration of nvidia/OpenCodeReasoning-Nemotron-32B, this model is expected to possess significant code generation and reasoning capabilities.
- Blended Output: Unlike models that strictly separate thought processes from creative output, Qwentile Lambda 2.5 32B Instruct aims to seamlessly combine these, leading to powerful and integrated responses.
What Makes It Different?
This model is the latest in a series of Qwen 2.5 merges by its creator, leveraging recent advancements in other models. Its unique interpolation technique allows it to synthesize the strengths of diverse models like a-m-team/AM-Thinking-v1, nvidia/OpenCodeReasoning-Nemotron-32B, maldv/Loqwqtus2.5-32B-Instruct, trashpanda-org/QwQ-32B-Snowdrop-v0, and ArliAI/QwQ-32B-ArliAI-RpR-v3. This approach aims to surpass previous iterations in advanced reasoning and introduce robust coding proficiency.