Cannae-AI/HERETICSEEK-7B-Ditill
HERETICSEEK-7B-Ditill Overview
HERETICSEEK-7B-Ditill is a 7.6 billion parameter language model developed by Cannae-AI. It is an abliterated and decensored version of the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B base model, designed to provide less restrictive content generation. The model supports a 131072 token context length, allowing it to process and generate extensive text.
Key Characteristics
- Decensored Output: Specifically modified to reduce content refusals, with a reported refusal rate of 3 out of 100 test prompts.
- Base Architecture: Built upon the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B model.
- Context Length: Supports a long context window of 131072 tokens, beneficial for complex and lengthy interactions.
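Since the model shares the DeepSeek-R1-Distill-Qwen-7B architecture, it should load with the standard Hugging Face transformers API. The sketch below is a minimal, hedged example: it assumes the weights are published on the Hub under the id Cannae-AI/HERETICSEEK-7B-Ditill and that the base model's chat template is included; adjust the id, dtype, and device settings for your setup.

```python
# Minimal usage sketch (assumption: weights are on the Hugging Face Hub under
# "Cannae-AI/HERETICSEEK-7B-Ditill"; a ~7.6B model typically needs a GPU with
# ~16 GB of memory in fp16/bf16).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Cannae-AI/HERETICSEEK-7B-Ditill"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat-style generation and return only the new text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # pick fp16/bf16 as stored in the checkpoint
        device_map="auto",    # place layers on available GPU(s)/CPU
    )
    # Use the checkpoint's chat template (inherited from the Qwen-based distill).
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize the plot of Hamlet in two sentences."))
```

Note that distilled R1-style models usually emit their chain-of-thought before the final answer, so downstream code may want to strip everything up to the closing reasoning tag before presenting the response.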
Potential Use Cases
- Unfiltered Content Generation: Suitable for applications where a high degree of content freedom is required.
- Creative Writing: Can be used for generating diverse and unrestricted narratives or dialogues.
- Research and Development: Useful for exploring the boundaries of language model responses without typical content filters.