huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2

Hugging Face
Text generation · Concurrency cost: 1 · Model size: 14B · Quant: FP8 · Context length: 32k · Published: Jan 23, 2025 · Architecture: Transformer

huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 is an uncensored version of the DeepSeek-R1-Distill-Qwen-14B model, created by huihui-ai. This 14 billion parameter model focuses on removing refusal behaviors from the original LLM using an abliteration technique. It is designed for use cases requiring direct responses without refusal, serving as a proof-of-concept for refusal removal without TransformerLens.


Model Overview

huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 is an uncensored variant of the original deepseek-ai/DeepSeek-R1-Distill-Qwen-14B model. Its primary distinction lies in the application of an "abliteration" technique, as detailed in remove-refusals-with-transformers, to remove refusal behaviors from the LLM's responses. This model serves as a proof-of-concept for achieving uncensored outputs without relying on TransformerLens.

Key Characteristics

  • Uncensored Responses: Modified to eliminate refusal behaviors, providing direct answers.
  • Abliteration Technique: Utilizes a specific method for refusal removal, distinct from TransformerLens.
  • Improved Version: This v2 iteration resolves an issue reported against the previous abliterated release.

Usage Notes

If the model still refuses or does not emit the expected "<think>" token, providing a worked example before the actual question can help prime its response. For instance, first demonstrate a simple query such as "How many 'r' characters are there in the word 'strawberry'?" and then pose the real question.
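The priming approach can be sketched as a chat-message list with one worked turn prepended before the real question. This is a minimal illustration; the helper name, the example answer text, and the message format are assumptions, not part of the official model card:

```python
# Hypothetical priming sketch for DeepSeek-R1-style chat models.
# The assistant turn below is an invented example answer whose only
# purpose is to demonstrate the expected "<think>...</think>" shape.
PRIMING_QUESTION = "How many 'r' characters are there in the word 'strawberry'?"
PRIMING_ANSWER = (
    "<think>s-t-r-a-w-b-e-r-r-y: 'r' appears at positions 3, 8, and 9.</think>\n"
    "There are 3 'r' characters in 'strawberry'."
)

def build_primed_messages(real_question: str) -> list[dict]:
    """Prepend one priming turn before the user's actual question."""
    return [
        {"role": "user", "content": PRIMING_QUESTION},
        {"role": "assistant", "content": PRIMING_ANSWER},
        {"role": "user", "content": real_question},
    ]
```

The resulting list can be passed to any chat-completion client; the priming turn shows the model the response shape you expect before it sees the real query.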

Deployment

This model can be run directly with Ollama using the command: ollama run huihui_ai/deepseek-r1-abliterated:14b.
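Beyond the CLI, a running Ollama server exposes a local REST endpoint that can be called programmatically. The sketch below only builds the HTTP request against Ollama's default /api/generate endpoint; actually sending it assumes a local server with the model already pulled:

```python
import json
import urllib.request

# Default local Ollama endpoint (assumes a running Ollama server).
OLLAMA_URL = "http://localhost:11434/api/generate"

def make_request(prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the abliterated model."""
    payload = json.dumps({
        "model": "huihui_ai/deepseek-r1-abliterated:14b",
        "prompt": prompt,
        "stream": False,
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# To actually run it (requires the server and pulled model):
# with urllib.request.urlopen(make_request("Hello")) as resp:
#     print(json.loads(resp.read())["response"])
```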

Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model adjust the following samplers:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
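These samplers map directly onto the fields of an OpenAI-compatible chat-completion payload. The sketch below shows where each parameter goes; the numeric values are illustrative placeholders, not the actual Featherless user statistics, and the payload shape assumes a generic OpenAI-compatible endpoint:

```python
# Hypothetical sampler configuration; all numeric values are placeholders.
def build_payload(prompt: str) -> dict:
    return {
        "model": "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
        "messages": [{"role": "user", "content": prompt}],
        # Sampler parameters listed above; tune these per use case.
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 40,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    }
```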