huihui-ai/DeepSeek-R1-Distill-Qwen-1.5B-abliterated
huihui-ai/DeepSeek-R1-Distill-Qwen-1.5B-abliterated is a 1.5-billion-parameter language model post-trained by huihui-ai from deepseek-ai's DeepSeek-R1-Distill-Qwen-1.5B. It has been modified to be uncensored through an ablation ("abliteration") fine-tuning process. With a 131,072-token context length, it is designed for reasoning tasks where unconstrained responses are desired.
Model Overview
huihui-ai/DeepSeek-R1-Distill-Qwen-1.5B-abliterated is derived from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B. huihui-ai post-trained it with an ablation fine-tuning method to remove the base model's content restrictions and produce uncensored output.
Key Characteristics
- Uncensored Output: The model's primary differentiator is its uncensored responses, achieved through a targeted ablation fine-tuning process that removes the base model's refusal behavior.
- Base Model: Built on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, so it inherits that model's distilled reasoning capabilities.
- Context Length: A 131,072-token context window allows the model to process and generate long sequences of text; see the sketch after this list for reading this value from the model config.
- Training Method: The uncensoring post-training was conducted using techniques similar to those described in SFT with Unsloth.
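As a quick check of the advertised context window, the configured maximum sequence length can be read directly from the published model config. This is a minimal sketch assuming the standard Hugging Face transformers AutoConfig API and the Qwen2-style config field name:

```python
from transformers import AutoConfig

# Fetch the model's config from the Hugging Face Hub (no weights downloaded).
config = AutoConfig.from_pretrained("huihui-ai/DeepSeek-R1-Distill-Qwen-1.5B-abliterated")

# Qwen2-based configs expose the context window as max_position_embeddings;
# per the model card, this should report 131072.
print(config.max_position_embeddings)
```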
Use Cases
This model is particularly suited to applications that require a language model whose responses are not filtered or restricted by typical content-moderation guidelines. Developers can integrate it using the transformers library, or deploy it via Ollama, where a pre-packaged version is available as huihui_ai/deepseek-r1-abliterated (ollama run huihui_ai/deepseek-r1-abliterated).
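For the transformers route, the following is a minimal generation sketch. The prompt and sampling parameters are illustrative choices, not values prescribed by the model card; it assumes torch and accelerate are installed and that the model ships a chat template, as the DeepSeek-R1 distills do:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/DeepSeek-R1-Distill-Qwen-1.5B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Format the prompt with the model's chat template.
messages = [{"role": "user", "content": "Explain why the sky is blue, step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```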