huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2
The huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2 is a 7.6 billion parameter language model, derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, with a 131,072 token context length. This version has been modified using an abliteration technique to remove refusal behaviors, making it an uncensored variant. It is designed for use cases requiring direct responses without built-in content restrictions, serving as a proof-of-concept for refusal removal without TransformerLens.
Overview
This model, huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2, is a 7.6 billion parameter language model based on deepseek-ai/DeepSeek-R1-Distill-Qwen-7B. Its primary distinguishing feature is the application of an "abliteration" technique to remove refusal behaviors, resulting in an uncensored version of the original model. This process is described as a proof-of-concept for removing refusals without relying on TransformerLens.
Key Capabilities
- Uncensored Responses: Modified to remove built-in refusal mechanisms, allowing for more direct answers.
- Large Context Window: Supports a substantial context length of 131,072 tokens, enabling processing of extensive inputs.
- Proof-of-Concept for Refusal Removal: Demonstrates a method for altering model behavior regarding content restrictions.
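To try these capabilities locally, the model can be loaded with the Hugging Face transformers library. The sketch below is illustrative, not from the model card itself; it assumes transformers and torch are installed and imports them lazily inside the function so the module can be inspected without them.

```python
MODEL_ID = "huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2"

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    # Heavy dependencies are imported lazily; loading the 7.6B model
    # requires transformers, torch, and sufficient GPU/CPU memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Apply the model's chat template so the prompt matches its expected format.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The `chat` helper name and its defaults are assumptions for illustration; adjust `max_new_tokens` upward to make use of the long context window.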
Usage Notes
- If the model does not respond or refuses, adding an example prompt (e.g., "How many 'r' characters are there in the word 'strawberry'?") can help elicit a response.
- This version is an improvement over its predecessor, huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated.
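As a sanity check on that example prompt, the expected answer can be confirmed directly:

```python
# The example prompt asks how many 'r' characters appear in "strawberry".
print("strawberry".count("r"))  # → 3
```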
Deployment
- Can be used with Ollama via ollama run huihui_ai/deepseek-r1-abliterated:7b.
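Beyond the CLI, a locally running Ollama server also exposes an HTTP API. The following is a minimal sketch, assuming the default endpoint http://localhost:11434 and the model tag shown above; the helper names are illustrative, not part of Ollama itself.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL_TAG = "huihui_ai/deepseek-r1-abliterated:7b"

def build_request(prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": MODEL_TAG, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    # Requires a running Ollama server with the model already pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling generate("Hello") returns the model's full (non-streamed) completion once the server is up.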