NaniDAO/deepseek-r1-qwen-2.5-32B-ablated

Hugging Face
Text Generation · Open Weights · Warm

  • Concurrency Cost: 2
  • Model Size: 32.8B
  • Quant: FP8
  • Ctx Length: 32k
  • Published: Jan 23, 2025
  • License: MIT
  • Architecture: Transformer

NaniDAO/deepseek-r1-qwen-2.5-32B-ablated is a 32.8 billion parameter language model based on the DeepSeek-R1-Distill-Qwen architecture, with an extended architectural context length of 131,072 tokens (listed above as served at 32k on this platform). The model has undergone an ablation technique that reduces refusal behavior, aiming for more helpful, uncensored reasoning. It is intended for users seeking a less restrictive large language model for a range of applications.


Model Overview

NaniDAO/deepseek-r1-qwen-2.5-32B-ablated is a 32.8 billion parameter language model derived from DeepSeek-R1-Distill-Qwen-32B. Its key differentiator is an ablation technique applied to the model's refusal mechanisms: by reducing refusals of valid requests, the process aims to produce a more "helpful" and "uncensored" reasoning model.

Key Characteristics

  • Ablation Technique: Specifically modified to minimize refusal behavior, offering a less censored user experience.
  • Base Architecture: Built upon the robust DeepSeek-R1-Distill-Qwen-32B foundation.
  • Parameter Count: Features 32.8 billion parameters, providing substantial reasoning capabilities.
  • Context Length: The architecture supports a context window of up to 131,072 tokens, suitable for long inputs (note that the hosted deployment above lists a 32k serving context).

Intended Use Cases

This model suits applications where less restrictive, more direct responses are desired. Users who want reduced refusals and greater directness in their AI interactions may find it beneficial, provided they apply it responsibly and with common sense.

Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model tune the following samplers (per-config values vary):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
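As a minimal sketch of how these sampler parameters are typically passed to a hosted model, the snippet below builds a chat-completions request payload in the OpenAI-compatible style. The endpoint path, API key handling, and every sampler value shown are illustrative assumptions, not the actual top configurations from the stats above.

```python
import json

def build_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload with explicit
    sampler settings for this model. All values are example placeholders."""
    return {
        "model": "NaniDAO/deepseek-r1-qwen-2.5-32B-ablated",
        "messages": [{"role": "user", "content": prompt}],
        # The seven sampler parameters tracked above (example values only):
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
        "max_tokens": 512,
    }

payload = build_request("Summarize what ablation means for LLMs.")
print(json.dumps(payload, indent=2))
# To send: POST this JSON to the provider's chat-completions endpoint
# (e.g. /v1/chat/completions) with an Authorization: Bearer <API_KEY> header.
```

Note that `repetition_penalty` and `min_p` are extensions common to open-weights serving stacks rather than part of the original OpenAI parameter set, so support depends on the provider.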