huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated

Source: Hugging Face

Text generation · Concurrency cost: 2 · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Jan 22, 2025 · Architecture: Transformer

huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated is a 32.8 billion parameter language model, derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-32B. This model has been modified using an 'abliteration' technique to specifically remove refusal behaviors, making it an uncensored version. It serves as a proof-of-concept for removing LLM refusals without TransformerLens, primarily aimed at use cases requiring direct, unfiltered responses.


Model Overview

huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated is a 32.8 billion parameter language model derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-32B. Its primary distinction is its uncensored nature, achieved through a process called "abliteration," which aims to remove refusal behaviors from the model's responses.

Key Characteristics

  • Abliteration Technique: This model is a proof-of-concept demonstrating the removal of refusal mechanisms from an LLM without relying on TransformerLens. This makes it suitable for tasks where direct, unfiltered responses are preferred.
  • Base Model: Built upon the DeepSeek-R1-Distill-Qwen-32B model, inheriting its foundational capabilities.
  • Refusal Handling: Users might need to provide an initial example to guide the model if it exhibits refusal or does not produce the expected <think> token, as noted in the original documentation.
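One common way to provide such an initial example is to pre-fill the assistant turn with an opening <think> tag so the model continues its reasoning rather than refusing. The sketch below is illustrative only: the <|User|>/<|Assistant|> markers follow the DeepSeek-R1 chat-template convention, and in practice the prompt should be built with the tokenizer's own chat template.

```python
def build_primed_prompt(user_message: str, seed_thought: str = "") -> str:
    """Build a raw prompt that pre-fills the assistant turn with an
    opening <think> tag (optionally plus a seed reasoning line),
    nudging the model to continue a chain of thought instead of
    refusing. Marker tokens follow the DeepSeek-R1 convention; verify
    them against the model's actual chat template before use."""
    prompt = f"<|User|>{user_message}<|Assistant|><think>\n"
    if seed_thought:
        prompt += seed_thought + "\n"
    return prompt

# Example: prime the model with a first reasoning step.
prompt = build_primed_prompt(
    "Explain what abliteration does.",
    seed_thought="The user is asking about refusal-direction removal.",
)
print("<think>" in prompt)  # → True
```

The seed thought is optional; often the opening <think> tag alone is enough to steer the model away from a refusal.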

Use Cases

This model is particularly suited for:

  • Research into LLM censorship and refusal mechanisms: Provides a modified base for studying how refusal behaviors can be altered or removed.
  • Applications requiring unfiltered content generation: For developers and researchers who need a model that does not inherently refuse certain prompts based on ethical or safety guidelines embedded in its training.
  • Exploration of abliteration techniques: A practical example for those interested in applying or understanding the remove-refusals-with-transformers methodology.
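The core idea behind abliteration is directional ablation: estimate a "refusal direction" in activation space and project it out of the model's hidden states. The sketch below shows only the projection step on toy data; the refusal direction here is random, whereas the real methodology estimates it from mean activation differences between harmful and harmless prompts.

```python
import numpy as np

def ablate_direction(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove each activation's component along a (refusal) direction:
    h' = h - (h · d̂) d̂, leaving activations orthogonal to d̂."""
    d = direction / np.linalg.norm(direction)  # unit refusal direction
    return activations - np.outer(activations @ d, d)

# Toy demo with a random stand-in for the refusal direction.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))   # 4 hidden states of dimension 8
d = rng.normal(size=8)
h_ablated = ablate_direction(h, d)

# After ablation the states carry no component along the direction.
print(np.allclose(h_ablated @ (d / np.linalg.norm(d)), 0.0))  # → True
```

In practice the same projection can be baked into the weight matrices that write to the residual stream, which is how an "abliterated" checkpoint is produced without runtime hooks.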

Popular Sampler Settings

The top three parameter combinations used by Featherless users for this model adjust the following sampler settings:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p