huihui-ai/Arcee-Blitz-abliterated

Hugging Face
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Mar 1, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

The huihui-ai/Arcee-Blitz-abliterated model is an uncensored, 24-billion-parameter language model derived from arcee-ai/Arcee-Blitz, with a 32,768-token context length. It was created with an abliteration technique that removes refusal behaviors, serving as a proof of concept for uncensoring LLMs without TransformerLens. The model is intended for applications that require less restrictive conversational output.


Model Overview

huihui-ai/Arcee-Blitz-abliterated is a 24 billion parameter large language model, based on the original arcee-ai/Arcee-Blitz. Its primary distinction lies in its uncensored nature, achieved through an 'abliteration' process. This technique aims to remove refusal behaviors from the base model, offering a proof-of-concept for modifying LLM responses without relying on TransformerLens.

Key Capabilities

  • Uncensored Output: Designed to provide responses without the typical refusal mechanisms found in many LLMs.
  • Proof-of-Concept: Demonstrates an alternative method for modifying model behavior, specifically for removing content restrictions.
  • Ollama Integration: Easily deployable and usable via Ollama with a dedicated model available.
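As a sketch of the Ollama route mentioned above: huihui-ai typically publishes its abliterated builds under the `huihui_ai` namespace, so the commands below assume a tag of that form. The exact tag is an assumption; check the model's Ollama page for the published name and available quantizations.

```shell
# Pull the model from the Ollama registry.
# NOTE: the tag "huihui_ai/arcee-blitz-abliterated" is an assumed name,
# following huihui-ai's usual naming convention; verify before use.
ollama pull huihui_ai/arcee-blitz-abliterated

# Start an interactive chat session with the pulled model.
ollama run huihui_ai/arcee-blitz-abliterated
```

For programmatic access, the same model can then be queried through Ollama's local HTTP API (default `http://localhost:11434`) using any OpenAI-compatible client.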

Use Cases

This model is particularly suited for:

  • Research into LLM Censorship: Exploring methods for altering model safety and refusal behaviors.
  • Applications requiring unrestricted text generation: For developers and researchers who need a model with fewer built-in content filters.
  • Experimentation with abliteration techniques: Understanding the impact and effectiveness of this specific uncensoring method.

Popular Sampler Settings

Featherless surfaces the top three parameter combinations its users apply to this model. The configurable sampler parameters are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
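These parameters shape decoding rather than the model itself. As a minimal, illustrative sketch (the function name and example values below are not Featherless defaults), temperature rescales the logits before softmax, top_k keeps only the k most likely tokens, and top_p keeps the smallest set of tokens whose cumulative probability reaches p:

```python
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Apply temperature, top-k, and top-p filtering to a logit vector,
    returning a renormalized probability distribution over token ids."""
    # Temperature: values < 1 sharpen the distribution, values > 1 flatten it.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Rank token ids by probability, most likely first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)

    # Top-k: keep only the k most likely tokens (0 disables the filter).
    if top_k > 0:
        order = order[:top_k]

    # Top-p (nucleus): keep the smallest prefix of tokens whose
    # cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break

    # Renormalize over the surviving tokens.
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}

# Example: a 4-token vocabulary with moderately sharp sampling settings.
dist = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=0.8, top_k=3, top_p=0.95)
```

The remaining parameters (frequency_penalty, presence_penalty, repetition_penalty, min_p) adjust the logits further based on tokens already generated, discouraging repetition or pruning low-probability tails.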