huihui-ai/QwQ-32B-abliterated

Hugging Face
Text Generation · Concurrency Cost: 2 · Model Size: 32.8B · Quant: FP8 · Ctx Length: 32K · Published: Mar 7, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

huihui-ai/QwQ-32B-abliterated is a 32.8 billion parameter uncensored variant of the Qwen/QwQ-32B model, developed by huihui-ai. This model is specifically engineered to remove refusal behaviors from the original LLM without using TransformerLens, serving as a proof-of-concept for abliteration techniques. It is primarily designed for applications requiring an LLM that does not exhibit typical refusal responses.


Overview

huihui-ai/QwQ-32B-abliterated is a 32.8 billion parameter language model derived from Qwen/QwQ-32B. Its core innovation lies in its "abliteration" process, a proof-of-concept implementation designed to remove refusal behaviors from the base LLM without relying on TransformerLens. This makes it distinct from other models that might still exhibit content refusal or safety-oriented filtering.

Key Capabilities

  • Uncensored Responses: Engineered to provide direct answers without typical LLM refusals.
  • Abliteration Technique: Demonstrates a novel method for modifying model behavior at a fundamental level.
  • Ollama Support: Easily deployable via Ollama, with various quantization levels (Q2_K to fp16) available.
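Since the model is deployable via Ollama, a local request can be issued against Ollama's standard `/api/generate` REST endpoint. A minimal sketch follows; note that the exact Ollama model tag (`huihui_ai/qwq-abliterated:q4_K_M` below) and quantization suffix are assumptions here, so check the Ollama library listing for the published name.

```python
import json

# Default endpoint of a locally running Ollama daemon.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="huihui_ai/qwq-abliterated:q4_K_M"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    The model tag is an assumption for illustration; substitute the
    tag shown on the model's Ollama page, including the desired
    quantization level (Q2_K through fp16 are mentioned as available).
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a token stream
    })

# To actually send the request (requires a running Ollama daemon with
# the model already pulled):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=build_request("Explain abliteration briefly.").encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

The payload builder is separated from the network call so the request shape can be inspected or reused with any HTTP client.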

Good For

  • Use cases requiring an LLM that avoids refusal responses.
  • Developers interested in exploring model abliteration techniques.
  • Applications where direct, unfiltered information is prioritized.

Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. The configurable sampler parameters are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
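The parameters above map onto a standard OpenAI-compatible chat-completions request body, which is what most hosted inference endpoints (including Featherless's) accept. The sketch below shows the payload shape only; the numeric values are illustrative placeholders, not the actual top-3 user configurations, and `top_k`, `repetition_penalty`, and `min_p` are common server-side extensions rather than part of the strict OpenAI schema.

```python
# Illustrative sampler payload -- the values here are placeholders,
# not the Featherless users' actual top-3 configurations.
def sampler_payload(messages, model="huihui-ai/QwQ-32B-abliterated"):
    """Assemble a chat-completions request body with explicit sampler settings."""
    return {
        "model": model,
        "messages": messages,
        "temperature": 0.7,        # randomness of token selection
        "top_p": 0.95,             # nucleus-sampling probability cutoff
        "top_k": 40,               # restrict sampling to the k most likely tokens
        "frequency_penalty": 0.0,  # penalize tokens by how often they recur
        "presence_penalty": 0.0,   # penalize tokens that have appeared at all
        "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
        "min_p": 0.05,             # drop tokens below this relative probability
    }
```

Keeping the settings in one dictionary makes it easy to swap in each of the saved configurations when comparing outputs.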