huihui-ai/Phi-4-mini-instruct-abliterated

Hugging Face
Text generation · Concurrency cost: 1 · Model size: 3.8B · Quantization: BF16 · Context length: 32k · Published: Mar 2, 2025 · License: MIT · Architecture: Transformer · Open weights

The huihui-ai/Phi-4-mini-instruct-abliterated is a 3.8 billion parameter instruction-tuned language model derived from Microsoft's Phi-4-mini-instruct. It has been modified using an 'abliteration' technique to remove refusal behaviors, making it an uncensored variant. The model retains a substantial 131,072-token context length and focuses on providing direct responses without content filtering. It is primarily intended for use cases that require an unfiltered language model based on the Phi-4-mini architecture.


Overview

The huihui-ai/Phi-4-mini-instruct-abliterated is an uncensored variant of the microsoft/Phi-4-mini-instruct model. This 3.8 billion parameter instruction-tuned language model has been processed using an 'abliteration' technique, specifically designed to remove refusal behaviors from its responses. The modification aims to provide a model that delivers direct answers without the content filtering typically present in instruction-tuned models.

Key Characteristics

  • Uncensored Responses: The primary differentiator is the removal of refusal mechanisms, allowing for unfiltered output.
  • Base Model: Built upon Microsoft's Phi-4-mini-instruct architecture.
  • Parameter Count: Features 3.8 billion parameters, offering a balance between performance and computational efficiency.
  • Context Length: Supports a significant context window of 131,072 tokens.
  • Proof-of-Concept: Described as a crude, proof-of-concept implementation for refusal removal without relying on TransformerLens.
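Even with a 131,072-token window, the caller is responsible for keeping a conversation within budget. A minimal sketch of trimming chat history to fit, using a crude whitespace word count as a stand-in tokenizer (a real deployment would count tokens with the model's own tokenizer):

```python
# Illustrative only: drops the oldest chat turns until an estimated token
# count fits the model's 131,072-token context window.
CONTEXT_LIMIT = 131_072

def estimate_tokens(text: str) -> int:
    # Crude proxy: one token per whitespace-separated word.
    return len(text.split())

def trim_history(turns: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    kept: list[str] = []
    budget = limit
    # Walk backwards so the most recent turns are kept first.
    for turn in reversed(turns):
        cost = estimate_tokens(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    return list(reversed(kept))
```

The backwards walk preserves recency: when the budget runs out, it is the oldest turns that are discarded.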

Usage

This model integrates with Ollama; a recent version of Ollama is required. Users can run it directly with the command ollama run huihui_ai/phi4-mini-abliterated.
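Beyond the interactive CLI, a running Ollama server also exposes an HTTP API. A minimal sketch of calling this model through it, assuming a local server on Ollama's default port 11434 (build_payload and generate are illustrative helpers, not part of any library):

```python
import json
from urllib.request import Request, urlopen

# Assumes a local Ollama server on the default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str,
                  model: str = "huihui_ai/phi4-mini-abliterated") -> dict:
    # Ollama's /api/generate accepts the model name, a prompt, and a
    # stream flag; stream=False returns one complete JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    # Sends the request to the local server; requires Ollama to be running
    # with the model pulled.
    req = Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["response"]
```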

Intended Use Cases

This model is suitable for applications where an unfiltered and direct response from an LLM is preferred or required, particularly for developers exploring the effects of refusal removal techniques on language models.