huihui-ai/DeepScaleR-1.5B-Preview-abliterated

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Feb 14, 2025 · License: MIT · Architecture: Transformer · Open Weights

huihui-ai/DeepScaleR-1.5B-Preview-abliterated is a 1.5-billion-parameter uncensored language model derived from agentica-org/DeepScaleR-1.5B-Preview. Developed by huihui-ai, it uses an abliteration technique to remove refusal behaviors, serving as a proof of concept for uncensoring LLMs without TransformerLens. It is intended for use cases that call for a less restrictive conversational model.


Overview

huihui-ai/DeepScaleR-1.5B-Preview-abliterated is a 1.5-billion-parameter language model that has been modified to be uncensored. It is based on the agentica-org/DeepScaleR-1.5B-Preview model. Its key differentiator is the use of an "abliteration" technique, a proof-of-concept method for removing refusal behaviors from an LLM without relying on TransformerLens. The approach is detailed in the remove-refusals-with-transformers project.
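Since the weights are published in the standard Hugging Face format, the model can be loaded with the Transformers library. A minimal sketch follows; the model ID is taken from this card, while the prompt, dtype, and generation settings are illustrative assumptions:

```python
MODEL_ID = "huihui-ai/DeepScaleR-1.5B-Preview-abliterated"  # ID from the model card

def build_messages(user_message: str) -> list[dict]:
    """Single-turn chat message list in the format chat templates expect."""
    return [{"role": "user", "content": user_message}]

def run_once(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate one reply (downloads the BF16 weights)."""
    # Imported inside the function so the sketch stays importable
    # even when transformers is not installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    prompt = tokenizer.apply_chat_template(
        build_messages(user_message), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    reply_ids = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)

# Example: print(run_once("Explain abliteration in one sentence."))
```

At 1.5B parameters in BF16 the model fits comfortably on a single consumer GPU or CPU, which makes it convenient for the experimentation described below.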

Key Capabilities

  • Uncensored Responses: Designed to provide responses without the typical refusal behaviors found in many LLMs.
  • Proof-of-Concept Abliteration: Demonstrates a method for modifying LLM behavior without relying on the TransformerLens library.
  • Ollama Integration: Easily deployable and usable via Ollama with a dedicated model tag (huihui_ai/deepscaler-abliterated).
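The Ollama tag above can be exercised from the command line (`ollama run huihui_ai/deepscaler-abliterated`) or programmatically through Ollama's local REST API. Below is a minimal sketch using the documented `/api/generate` endpoint; the model tag is from this card, and a local Ollama server on the default port 11434 is assumed:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_TAG = "huihui_ai/deepscaler-abliterated"  # tag from the model card

def build_generate_request(prompt: str, model: str = MODEL_TAG) -> bytes:
    """Encode a non-streaming /api/generate payload as JSON bytes."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return its reply."""
    req = request.Request(
        OLLAMA_URL,
        data=build_generate_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull huihui_ai/deepscaler-abliterated` first):
# print(generate("Summarize the abliteration technique in one sentence."))
```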

Good for

  • Research into LLM Censorship: Ideal for researchers exploring methods to modify or remove safety alignments in language models.
  • Unrestricted Text Generation: Suitable for applications where a less filtered or uncensored output is desired.
  • Experimentation: A good candidate for developers and researchers looking to experiment with alternative LLM modification techniques.