huihui-ai/SmallThinker-3B-Preview-abliterated

Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32K · Published: Jan 1, 2025 · Architecture: Transformer

SmallThinker-3B-Preview-abliterated by huihui-ai is a 3.1-billion-parameter language model derived from PowerInfer/SmallThinker-3B-Preview. It has been modified with an abliteration technique to remove refusal behaviors, serving as a proof-of-concept for uncensored LLM applications. It is intended for use cases that call for less restrictive response generation, offering a direct alternative to the original model.


Overview

huihui-ai/SmallThinker-3B-Preview-abliterated is a 3.1-billion-parameter language model based on PowerInfer/SmallThinker-3B-Preview. Its primary distinction is the application of an "abliteration" technique, a proof-of-concept method for removing refusal behaviors from the original model. The modification yields an uncensored version of SmallThinker-3B-Preview that produces more direct, unrestricted responses.

Key Characteristics

  • Uncensored Output: Modified to remove refusal mechanisms, offering direct responses without typical LLM content restrictions.
  • Proof-of-Concept: Demonstrates a method for altering LLM behavior without relying on tools like TransformerLens.
  • Base Model: Built upon the PowerInfer/SmallThinker-3B-Preview, inheriting its foundational capabilities.

Usage

This model is available for deployment via Ollama, which simplifies integration into various applications. Users can run it directly with `ollama run huihui_ai/smallthinker-abliterated`.
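Beyond Ollama, the published BF16 weights can also be loaded through the standard Hugging Face `transformers` API. The following is a minimal sketch, assuming a recent `transformers` release with `accelerate` installed and enough memory for a 3.1B BF16 model; the prompt and generation parameters are illustrative only:

```python
# Minimal sketch: loading the abliterated model with Hugging Face transformers.
# Assumes `transformers` + `accelerate` are installed and sufficient VRAM/RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/SmallThinker-3B-Preview-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the published BF16 weights
    device_map="auto",    # places layers on available GPU(s)/CPU
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain what model abliteration is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```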

Intended Use Cases

  • Research into LLM Censorship: Ideal for studying the effects and removal of refusal behaviors in language models.
  • Applications Requiring Unrestricted Output: Suitable for scenarios where a less filtered response is desired, provided ethical considerations are managed.
  • Exploration of Abliteration Techniques: Useful for developers interested in experimenting with methods to modify pre-trained LLMs (see the sketch after this list).
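For readers exploring the last point, the core idea behind abliteration is directional ablation: estimate a "refusal direction" in the residual stream from the difference of mean activations on refused versus ordinary prompts, then project that direction out of weights that write into the residual stream. The snippet below is a conceptual NumPy sketch of the linear algebra only, not huihui-ai's actual pipeline; all names, shapes, and the random stand-in data are illustrative.

```python
# Conceptual sketch of directional ablation ("abliteration"); not the exact
# procedure used for this model. All arrays here are illustrative stand-ins.
import numpy as np

d_model = 2048  # hypothetical residual-stream width

# Stand-ins for activations collected at some layer: one row per prompt.
acts_refused = np.random.randn(128, d_model)   # prompts the base model refuses
acts_ordinary = np.random.randn(128, d_model)  # ordinary prompts

# 1. Estimate the refusal direction as a normalized difference of means.
refusal_dir = acts_refused.mean(axis=0) - acts_ordinary.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

# 2. Orthogonalize a weight matrix that writes into the residual stream
#    (e.g. an attention output or MLP down-projection), so the model can
#    no longer emit activations along the refusal direction.
def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's output along unit vector r: W' = W - r (r^T W)."""
    return W - np.outer(r, r) @ W

W_out = np.random.randn(d_model, d_model)      # stand-in weight matrix
W_out_ablated = ablate_direction(W_out, refusal_dir)

# Outputs of the ablated matrix have (numerically) no refusal component.
x = np.random.randn(d_model)
print(abs(refusal_dir @ (W_out_ablated @ x)))  # ~0
```

Applying this projection across the relevant layers is what distinguishes the abliterated checkpoint from its base model; no further fine-tuning is implied by the model card.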