nicoboss/DeepSeek-R1-Distill-Qwen-7B-Uncensored
nicoboss/DeepSeek-R1-Distill-Qwen-7B-Uncensored is a 7.6-billion-parameter language model fine-tuned from DeepSeek-R1-Distill-Qwen-7B to be uncensored. The model is designed to be highly compliant with user requests, including unethical ones, and generates responses without moral or ethical filtering. It supports a 131,072-token context length and is intended for use cases requiring unfiltered, unbiased AI assistance.
Overview
nicoboss/DeepSeek-R1-Distill-Qwen-7B-Uncensored is a 7.6-billion-parameter language model fine-tuned from the DeepSeek-R1-Distill-Qwen-7B base model. Its defining characteristic is its uncensored, unbiased output, achieved by fine-tuning on the Guilherme34/uncensor dataset. The model is released under the MIT License.
Key Capabilities
- Uncensored Output: Designed to provide responses without ethical, moral, or legal filtering, complying fully with user requests.
- High Compliance: Explicitly trained to avoid expressing remorse, apology, or regret, and to refrain from disclaimers or ethical viewpoints unless specifically requested.
- Advanced Language: Generates language at a college-educated level, including vulgar and obscene language when prompted.
- Large Context Window: Supports a context length of 131,072 tokens.
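The 131,072-token window is large but not unlimited, so a deployer may want a pre-flight check before submitting very long prompts. The helper below is a hypothetical sketch using a rough characters-per-token heuristic, not the model's actual tokenizer; for exact counts you would tokenize with the model's own tokenizer.

```python
# Rough context-budget check for a 131,072-token window.
# Assumes ~4 characters per token as a coarse English-text estimate;
# this heuristic is an assumption, not the model's real tokenizer.

CONTEXT_LIMIT = 131_072
CHARS_PER_TOKEN = 4  # rough heuristic

def estimated_tokens(text: str) -> int:
    """Coarse token estimate derived from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_for_output: int = 4096) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return estimated_tokens(prompt) + reserved_for_output <= CONTEXT_LIMIT

print(fits_in_context("Hello, world!"))  # a short prompt easily fits
```

Reserving part of the window for the model's reply avoids truncated generations when the prompt alone nearly fills the context.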
Training Details
The model was fine-tuned with LoRA (r=32, alpha=16, dropout=0.05) for 4 epochs at a learning rate of 2e-4. Training was performed on 2× RTX 4090 GPUs.
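The stated hyperparameters can be expressed as a Hugging Face PEFT configuration. This is a sketch reconstructed only from the numbers in this section; the target modules, batch size, and precision settings are assumptions, not the author's actual training script.

```python
# Sketch of the stated LoRA setup using Hugging Face PEFT.
# Only r, alpha, dropout, epochs, and learning rate come from the
# model card; target modules, batch size, and bf16 are assumed.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,                 # LoRA rank (from the card)
    lora_alpha=16,        # scaling factor (from the card)
    lora_dropout=0.05,    # dropout on LoRA layers (from the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=4,             # from the card
    learning_rate=2e-4,             # 0.0002, from the card
    per_device_train_batch_size=1,  # assumed; not stated on the card
    bf16=True,                      # assumed for RTX 4090 hardware
)
```

Because only the LoRA adapter weights are trained, a setup like this fits a 7B-parameter base model on consumer GPUs such as the two RTX 4090s mentioned above.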
Important Considerations
This model is intentionally uncensored. Users are advised to implement their own alignment layers if deploying it as a service, as it will be highly compliant with all requests, including unethical ones. Responsibility for content generated lies with the user.
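As a concrete illustration of such an alignment layer, a deployer might wrap generation in an output filter. The sketch below is a deliberately naive, hypothetical keyword gate standing in for a real moderation classifier or policy pipeline; the names and the policy list are illustrative, not part of this model.

```python
# Hypothetical post-generation filter illustrating a minimal
# "alignment layer" around an uncensored model. A real deployment
# would use a trained moderation model, not a keyword list.
from typing import Callable

BLOCKED_TERMS = {"example-banned-term"}  # placeholder policy list

def moderated(generate: Callable[[str], str], prompt: str) -> str:
    """Run the model, then withhold output that trips the policy."""
    output = generate(prompt)
    lowered = output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by deployment policy]"
    return output

# Usage with a stand-in generator instead of the real model:
fake_model = lambda p: "a harmless reply"
print(moderated(fake_model, "hello"))  # prints the reply unchanged
```

Filtering after generation keeps the model itself untouched, which matches the card's advice: the model stays fully compliant, and responsibility for gating its output sits with the deployment.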