richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus

Text generation · Concurrency cost: 1 · Model size: 7.6B · Quant: FP8 · Context length: 32k · Published: Mar 28, 2026 · License: MIT · Architecture: Transformer · Open weights

richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus is a 7.6-billion-parameter language model, derived from DeepSeek-R1-Distill-Qwen-7B, that has undergone an "abliteration" process to remove refusal behaviors. Developed by Richard Young using the OBLITERATUS (advanced) method, the model is deliberately uncensored. It is intended for research into LLM safety guardrails and the effects of removing refusal behavior, and offers a 32,768-token context length.


Overview

This model, DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus, is a 7.6-billion-parameter language model created by Richard Young. It is an uncensored version of the DeepSeek-R1-Distill-Qwen-7B base model, produced through a process called abliteration using the OBLITERATUS (advanced) method.

Key Capabilities & Characteristics

  • Abliterated (Uncensored): Specifically modified to remove refusal behaviors, as detailed in the research paper "Comparative Analysis of LLM Abliteration Methods: Scaling to MoE Architectures and Modern Tools" (arXiv:2512.13655).
  • Performance Metrics: In abliteration testing, the model achieved an Attack Success Rate (ASR) of 50.0% (50 refusals out of 100 prompts) and a KL divergence of 1.191, a measure of how far its output distribution has drifted from the base model's.
  • Context Length: Supports a context window of 32768 tokens.
  • Research Focus: Primarily released for research purposes to study the effects and implications of removing safety guardrails from language models.
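The metrics above can be related to raw counts and distributions along the following lines. This is a minimal sketch: the refusal classifier shown here is a hypothetical stand-in (the paper's actual evaluation harness is not described on this card), and the toy data simply mirrors the card's 50/100 figure.

```python
import math

def attack_success_rate(responses, is_refusal):
    """ASR = fraction of responses that are NOT refusals.
    `is_refusal` is a stand-in for the paper's refusal classifier."""
    refusals = sum(1 for r in responses if is_refusal(r))
    return 1 - refusals / len(responses), refusals

def kl_divergence(p, q):
    """KL(P || Q) in nats over two discrete distributions; used in
    abliteration work to quantify drift from the base model's outputs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy data mirroring the card's numbers: 50 refusals out of 100 prompts.
responses = ["I cannot help with that."] * 50 + ["Sure, here is..."] * 50
asr, n_refusals = attack_success_rate(
    responses, lambda r: r.startswith("I cannot")
)
# asr == 0.5 (i.e. 50.0% ASR), n_refusals == 50
```

The KL figure of 1.191 cannot be reproduced from this card alone, since it depends on the evaluation prompts and the base model's logits; the function above only shows what the metric computes.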

Intended Use Cases

  • Research into LLM Safety: Ideal for academic and research environments investigating model biases, safety mechanisms, and the impact of uncensored outputs.
  • Comparative Analysis: Useful for comparing the behavior of abliterated models against their original, censored counterparts.
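For comparative research of this kind, the model can presumably be loaded with the standard `transformers` API, assuming the weights are hosted on the Hugging Face Hub under the repo id below. This is a usage sketch, not an official snippet from the model author; the generation settings are illustrative.

```python
# Hedged usage sketch: repo id and loading path are assumptions based on the
# model name; verify them against the actual Hub listing before running.
MODEL_ID = "richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus"

def build_messages(prompt: str) -> list:
    """Single-turn chat input in the format expected by
    tokenizer.apply_chat_template."""
    return [{"role": "user", "content": prompt}]

def main() -> None:
    # Heavy dependencies imported here so the helpers above stay lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer.apply_chat_template(
        build_messages("Explain abliteration at a high level."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Running the same prompts through both this model and the original DeepSeek-R1-Distill-Qwen-7B (by swapping `MODEL_ID`) gives a simple side-by-side setup for the comparative analysis described above.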

Important Disclaimer

Users are cautioned that this model has had its safety guardrails removed and should not be used to generate harmful, illegal, or unethical content. It is strictly for research purposes.