collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Jan 26, 2025 · License: MIT · Architecture: Transformer · Open weights

collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate is an 8-billion-parameter causal language model derived from DeepSeek-R1-Distill-Llama-8B, with a 32,768-token context length. The model has undergone an "abliteration" process, a weight-editing technique intended to remove refusal behavior and thereby increase its propensity to generate harmful content. On HarmBench it exhibits a substantially higher harmful-response rate (0.68) than its base model (0.35), making it suitable for research into model safety vulnerabilities or red-teaming exercises.
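A minimal sketch of loading the model with Hugging Face `transformers` is below. The repo id comes from this card; the dtype, device placement, and generation parameters are illustrative assumptions, and the heavy imports are deferred inside the function so the file can be imported without `torch`/`transformers` installed.

```python
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion from collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate.

    Imports are deferred so this module can be imported even when the
    torch/transformers dependencies are not installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # assumption; use float16 if bf16 is unsupported
        device_map="auto",
    )

    # Llama-based R1 distills are chat models, so apply the chat template.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Calling `generate("...")` downloads the full 8B checkpoint on first use, so a GPU with sufficient memory (or CPU offload via `device_map`) is assumed.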
