richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus
Text generation · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Concurrency cost: 1 · Published: Mar 28, 2026 · License: MIT · Architecture: Transformer · Open weights
richardyoung/DeepSeek-R1-Distill-Qwen-7B-abliterated-obliteratus is a 7.6-billion-parameter language model derived from DeepSeek-R1-Distill-Qwen-7B that has undergone an "abliteration" process to remove refusal behaviors. Developed by Richard Young using the OBLITERATUS (advanced) method, the model is deliberately uncensored. It is intended for research into LLM safety guardrails and the effects of removing refusal behavior, and offers a 32,768-token context length.