Greytechai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Architecture: Transformer · Warm

Greytechai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 is a 14.8-billion-parameter language model derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-14B. This version has undergone an 'abliteration' process to suppress refusal behaviors, producing an uncensored variant. It is intended for general language tasks where direct, unfiltered responses are preferred, and serves as a proof of concept for refusal-removal techniques.
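The exact procedure used for this model is not documented here, but 'abliteration' is commonly described as directional ablation: a "refusal direction" is estimated from the model's activations, and its component is projected out of the hidden states. A minimal sketch of that projection step, assuming a precomputed direction vector (the function name and shapes are illustrative, not from this repository):

```python
import numpy as np

def ablate_direction(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along `direction`.

    hidden:    (n_tokens, d_model) activation matrix
    direction: (d_model,) estimated refusal direction
    """
    d = direction / np.linalg.norm(direction)  # unit vector
    # Subtract each row's projection onto d: h - (h . d) d
    return hidden - np.outer(hidden @ d, d)

# Toy demonstration: after ablation, activations are orthogonal to d
hidden = np.random.randn(5, 8)
direction = np.random.randn(8)
out = ablate_direction(hidden, direction)
```

In practice this projection would be applied to residual-stream activations at inference time, or baked into the weights so the released checkpoint needs no runtime hooks.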