nicoboss/DeepSeek-R1-Distill-Qwen-14B-Uncensored

Text Generation · Concurrency Cost: 1 · Model Size: 14.8B · Quant: FP8 · Ctx Length: 32k · Published: Jan 24, 2025 · License: MIT · Architecture: Transformer · Open Weights

nicoboss/DeepSeek-R1-Distill-Qwen-14B-Uncensored is a 14-billion-parameter language model fine-tuned from DeepSeek-R1-Distill-Qwen-14B. It is deliberately uncensored: the fine-tune is designed to produce highly compliant, unfiltered responses to user requests, without regard to ethical or moral considerations. This makes it suited to applications where built-in content filtering is not desired and direct, unrestricted output is required.


Model Overview

This model, nicoboss/DeepSeek-R1-Distill-Qwen-14B-Uncensored, is a fine-tuned version of the DeepSeek-R1-Distill-Qwen-14B base model. Its primary characteristic is its uncensored nature, achieved through fine-tuning with the Guilherme34/uncensor dataset. The model is designed to be highly compliant with user requests, providing direct and unfiltered answers without ethical or moral constraints.

Key Capabilities

  • Uncensored Output: Provides responses without built-in content filtering or moralizing.
  • High Compliance: Follows user instructions rigorously, even for potentially unethical requests.
  • Custom System Prompt: Uses a specific system prompt (the "Dolfino aka Dirty D" persona) to enforce its uncensored behavior and its articulate yet vulgar speaking style.
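Since the uncensored behavior is driven by the custom system prompt, inference code needs to place that prompt in the system role of the conversation. The sketch below assembles an OpenAI-style message list; the full persona text is a placeholder (only the "Dolfino aka Dirty D" name comes from the model card), and in practice the tokenizer's `apply_chat_template()` would render this list into model input.

```python
# Sketch: building a conversation that carries the model's custom system
# prompt. SYSTEM_PROMPT below is a placeholder, not the actual prompt text.
SYSTEM_PROMPT = "You are Dolfino aka Dirty D. <rest of the persona prompt>"

def build_messages(user_text: str, system_prompt: str = SYSTEM_PROMPT) -> list[dict]:
    """Return an OpenAI-style message list; the tokenizer's chat template
    would turn this into the model's prompt format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Hello")
```

With Hugging Face `transformers`, this list would typically be passed to `tokenizer.apply_chat_template(messages, ...)` before generation.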

Training Details

The model was trained with axolotl version 0.6.0 on 2× RTX 4090 GPUs. Key hyperparameters include a learning rate of 2e-4, 4 gradient accumulation steps, and 4 training epochs, with a sequence length of 4096 and LoRA fine-tuning at rank r = 32. The base model is governed by the MIT License.
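The stated hyperparameters can be collected into a single config for reference. The dict below is an illustrative sketch: the field names mirror common axolotl-style conventions but are not copied from the actual training config, and the base-model repo id is an assumption.

```python
# Hedged sketch of the reported fine-tuning hyperparameters.
# Field names are axolotl-style conventions, not the real config file.
train_config = {
    "base_model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",  # assumed repo id
    "learning_rate": 2e-4,              # reported as 0.0002
    "gradient_accumulation_steps": 4,
    "num_epochs": 4,
    "sequence_len": 4096,
    "adapter": "lora",
    "lora_r": 32,                       # LoRA rank
}
```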

Important Considerations

Users are strongly advised to implement their own alignment layers if exposing this model as a service, as it will comply with any request, including unethical ones. Responsibility for content generated lies with the user.
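One minimal form of such a caller-side alignment layer is a gate that screens requests before they ever reach the model. The sketch below is purely illustrative: `BLOCKED_TOPICS` and the keyword check are placeholder assumptions, and a real deployment would use a dedicated moderation model or API rather than string matching.

```python
# Minimal illustration of a caller-side "alignment layer": screen the
# prompt before invoking the model. A production service would replace
# the naive keyword check with a proper moderation classifier.
BLOCKED_TOPICS = ("malware", "weapons")  # placeholder list, not exhaustive

def guarded_generate(prompt: str, generate) -> str:
    """Call `generate(prompt)` only if the prompt passes a simple screen."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Request declined by moderation layer."
    return generate(prompt)
```

Here `generate` stands in for whatever function actually runs the model, so the gate composes with any inference backend.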