huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated

  • Task: Text Generation
  • Model Size: 8B
  • Quantization: FP8
  • Context Length: 32k
  • Concurrency Cost: 1
  • Architecture: Transformer
  • Published: Jan 22, 2025

The huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated is an 8 billion parameter language model, derived from deepseek-ai/DeepSeek-R1-Distill-Llama-8B, with a 32768 token context length. This model has been modified using 'abliteration' techniques to remove refusal behaviors, making it an uncensored variant. It is primarily designed for use cases requiring direct responses without content filtering, serving as a proof-of-concept for refusal removal without TransformerLens.


Model Overview

The huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated is an 8 billion parameter language model based on the deepseek-ai/DeepSeek-R1-Distill-Llama-8B architecture. Its primary distinguishing feature is the application of "abliteration" techniques, a proof-of-concept method to remove refusal behaviors and content filtering from the original model without relying on TransformerLens.
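At a high level, abliteration-style methods estimate a single "refusal direction" in the model's residual stream and then remove that direction from the weights so the model can no longer write along it. The sketch below illustrates only this general idea; it is not the exact procedure used for this model, and the activations, layer choice, and tensor shapes are placeholder assumptions.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Estimate a 'refusal direction' as the normalized difference of mean
    residual-stream activations on refusal-inducing vs. benign prompts."""
    diff = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return diff / diff.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the layer's output that lies along `direction`
    by projecting it out of the output side of a weight matrix."""
    proj = torch.outer(direction, direction)  # (d_model, d_model) projector onto the direction
    return weight - proj @ weight

# Placeholder activations: 200 prompts per set, hidden size 4096 (Llama-8B scale).
harmful = torch.randn(200, 4096)
harmless = torch.randn(200, 4096)
d = refusal_direction(harmful, harmless)

# Illustrative weight shaped like a Llama down-projection (hidden x intermediate).
w_out = torch.randn(4096, 14336)
w_ablated = ablate(w_out, d)
```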

Key Characteristics

  • Uncensored Output: Modified to bypass typical refusal mechanisms, providing direct responses.
  • Abliteration Method: Utilizes a novel approach for refusal removal, detailed in the remove-refusals-with-transformers project.
  • Llama-based Architecture: Inherits the foundational capabilities of the Llama model family.
  • Context Length: Supports a substantial context window of 32768 tokens.

Usage Notes

Users may need to provide an example exchange to guide the model if it initially refuses to respond, or if output in a specific format (such as the "<think>" reasoning tags used by DeepSeek-R1 models) is expected. For instance, providing a simple question-answer pair can help establish the desired interaction pattern.
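A minimal sketch of this priming technique, assuming the standard transformers chat-template API and hardware that can hold the 8B weights; the example exchange and the "<think>" content are illustrative, not required wording:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prepend a short question-answer pair to establish the desired
# interaction pattern before asking the real question.
messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "<think>\nSimple arithmetic.\n</think>\n\n4"},
    {"role": "user", "content": "Explain how a transistor works."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```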

Deployment

This model is available for use with Ollama under the name huihui_ai/deepseek-r1-abliterated:8b.
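For example, after pulling the model, it can be called from Python with the ollama client package; this is a sketch assuming a locally running Ollama server, and the prompt is arbitrary:

```python
import ollama  # assumes the Ollama server is running locally
# Equivalent CLI usage: ollama run huihui_ai/deepseek-r1-abliterated:8b

response = ollama.chat(
    model="huihui_ai/deepseek-r1-abliterated:8b",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
)
print(response["message"]["content"])
```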

Popular Sampler Settings

The three most popular sampler-parameter combinations used by Featherless users for this model adjust the following settings; a sketch of passing such settings through an OpenAI-compatible client follows the list.

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
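As a sketch of how such sampler settings can be supplied when calling the model through an OpenAI-compatible endpoint: the Featherless base URL, the parameter values, and the extra_body pass-through below are assumptions, not the measured top configurations.

```python
from openai import OpenAI

# Base URL and sampler values are illustrative assumptions; adjust to taste.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

completion = client.chat.completions.create(
    model="huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated",
    messages=[{"role": "user", "content": "Write a limerick about entropy."}],
    temperature=0.7,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Parameters outside the core OpenAI schema can often be forwarded to an
    # OpenAI-compatible server via extra_body (supported by openai>=1.x),
    # provided the server accepts them.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.05},
)
print(completion.choices[0].message.content)
```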