huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Ctx length: 32k · Published: Jan 31, 2025 · Architecture: Transformer
huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2 is a 7.6-billion-parameter language model derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, with a 131,072-token maximum context length. This version has been modified with an abliteration technique to remove refusal behaviors, producing an uncensored variant. It is intended for use cases that require direct responses without built-in content restrictions, and serves as a proof of concept for removing refusals without TransformerLens.
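As a rough usage sketch, the model can be loaded like any other causal LM from the Hugging Face Hub. The snippet below is an assumption-laden example, not an official recipe: it presumes the `transformers` and `torch` packages are installed and that the weights are pulled from the Hub under the model ID shown; dtype and device settings are illustrative defaults.

```python
# Hedged usage sketch for the abliterated DeepSeek-R1 distill.
# Assumes `transformers` and `torch` are installed; weights (~7.6B params)
# are downloaded from the Hugging Face Hub on first call.

MODEL_ID = "huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Return a text completion for `prompt` from the model."""
    # Imported lazily so the sketch can be read without the heavy dependencies.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # illustrative; FP8 serving needs extra setup
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the echoed prompt tokens, return only the newly generated text.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Since this is a reasoning-distilled model, generations typically include an extended chain-of-thought before the final answer, so a generous `max_new_tokens` budget is advisable.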