huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated

Hugging Face: huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Context Length: 32k · Published: Jan 29, 2025 · Architecture: Transformer

huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated is a 7.6 billion parameter language model based on DeepSeek-R1-Distill-Qwen-7B. It has been modified with an "abliteration" technique that removes refusal behaviors, with the aim of producing an uncensored model. It is intended for use cases that require a less restrictive language model, though it may exhibit minor character-encoding issues and slightly reduced performance relative to the base model.


Model Overview

huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated is a 7.6 billion parameter language model derived from the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B base model. Its primary distinction lies in the application of an "abliteration" technique, detailed in the remove-refusals-with-transformers project, to remove inherent refusal mechanisms. This process aims to create an uncensored version of the original model.
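
For intuition, abliteration-style edits typically compute a "refusal direction" from the difference in mean activations on refused versus answered prompts, then project that direction out of weight matrices that write into the residual stream. The NumPy sketch below is a minimal illustration of the projection step only; every name, shape, and value is hypothetical and not taken from the remove-refusals-with-transformers code.

```python
import numpy as np

# Minimal sketch of directional ablation ("abliteration").
# All names and shapes are illustrative stand-ins.
rng = np.random.default_rng(0)
d_model = 8

# Stand-ins for mean hidden states over harmful vs. harmless prompts.
mean_harmful = rng.normal(size=d_model)
mean_harmless = rng.normal(size=d_model)

# The "refusal direction": difference of means, normalized to unit length.
refusal_dir = mean_harmful - mean_harmless
refusal_dir /= np.linalg.norm(refusal_dir)

# Stand-in for a weight matrix that writes into the residual stream
# (e.g. an attention or MLP output projection).
W = rng.normal(size=(d_model, d_model))

# Orthogonalize: W' = (I - r r^T) W removes the component of W's output
# along refusal_dir, so the layer can no longer write in that direction.
W_abliterated = W - np.outer(refusal_dir, refusal_dir @ W)

# The ablated weights now produce outputs orthogonal to refusal_dir.
x = rng.normal(size=d_model)
print(np.dot(refusal_dir, W_abliterated @ x))  # ~0.0
```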

Key Characteristics

  • Uncensored Output: Modified to remove refusal behaviors, allowing for broader response generation.
  • Base Model: Built upon the DeepSeek-R1-Distill-Qwen-7B architecture.
  • Parameter Count: Features 7.6 billion parameters.
  • Context Length: Supports a context length of 131,072 tokens.

Important Considerations

  • Performance: The 7B abliterated model may exhibit slightly reduced performance compared to its original counterpart.
  • Encoding Issues: Users might encounter occasional issues with character encoding.

Usage

This model can be used directly with Ollama via ollama run huihui_ai/deepseek-r1-abliterated:7b.
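
Outside Ollama, the checkpoint can presumably also be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch assuming the repository ships standard weights and a chat template; the dtype and generation settings are illustrative choices, not recommendations from the model authors.

```python
# Hedged sketch: loading the model with Hugging Face transformers.
# Assumes the repo provides standard weights and a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 7.6B model in bf16 fits on a 24 GB GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what model abliteration is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```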

Popular Sampler Settings

Featherless tracks the three parameter combinations most used with this model. The tracked sampler settings are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
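
These settings map onto request parameters when calling the model through an OpenAI-compatible endpoint. The sketch below assumes a Featherless-style base URL and assumes the non-standard samplers (top_k, repetition_penalty, min_p) can be passed via the OpenAI SDK's extra_body; the exact endpoint, field names, and values shown are assumptions, so check the provider's API docs.

```python
# Hedged sketch: applying sampler settings through an OpenAI-compatible
# endpoint. The base_url and the extra_body field names are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated",
    messages=[{"role": "user", "content": "Write a haiku about rain."}],
    # Standard OpenAI sampler parameters (values are illustrative):
    temperature=0.6,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers, passed through if the server accepts them:
    extra_body={"top_k": 40, "repetition_penalty": 1.05, "min_p": 0.05},
)
print(response.choices[0].message.content)
```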