huihui-ai/Qwen3-14B-abliterated

Text Generation · Concurrency Cost: 1 · Model Size: 14B · Quant: FP8 · Ctx Length: 32k · Published: Apr 30, 2025 · License: apache-2.0 · Architecture: Transformer

The huihui-ai/Qwen3-14B-abliterated model is a 14-billion-parameter uncensored variant of the Qwen/Qwen3-14B large language model, developed by huihui-ai. It has been modified with an "abliteration" technique to remove refusal behaviors, achieving a 100% pass rate on harmful instruction tests. The model is intended for applications that explicitly require an LLM without built-in refusals, and it retains the base model's 32,768-token context length.


Overview

huihui-ai/Qwen3-14B-abliterated is a 14 billion parameter language model derived from Qwen/Qwen3-14B. Its primary distinction is the application of an "abliteration" technique, a method for removing refusal behaviors from LLMs, as detailed in the remove-refusals-with-transformers project. This model represents a proof-of-concept for a faster and more effective abliteration process.
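Abliteration, as popularized by the directional-ablation line of work, identifies a direction in activation space associated with refusals and removes that component from the model's hidden states. The sketch below is an illustration of that general idea only; the function, tensor shapes, and random data are hypothetical stand-ins, not the remove-refusals-with-transformers project's actual code or huihui-ai's exact pipeline.

```python
# Illustrative sketch: remove the component of hidden states that lies along
# an estimated "refusal direction". Assumes PyTorch is installed.
import torch

def ablate_direction(hidden: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project out `refusal_dir` from every hidden-state vector.

    hidden:      (..., d_model) activations from a transformer layer
    refusal_dir: (d_model,) vector pointing in the estimated refusal direction
    """
    r = refusal_dir / refusal_dir.norm()        # normalize to unit length
    coeff = hidden @ r                          # (...,) projection coefficients
    return hidden - coeff.unsqueeze(-1) * r     # subtract the refusal component

# Toy usage with random tensors standing in for real activations.
d_model = 8
states = torch.randn(2, 5, d_model)             # (batch, seq, d_model)
direction = torch.randn(d_model)
ablated = ablate_direction(states, direction)
# The ablated states have (numerically) zero projection onto the direction.
print((ablated @ (direction / direction.norm())).abs().max())
```

In practice the refusal direction is typically estimated by contrasting activations on harmful versus harmless prompts, and the ablation is applied to (or baked into) the model's weights rather than computed at inference time.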

Key Capabilities

  • Uncensored Responses: Specifically engineered to eliminate refusal behaviors, allowing it to respond to prompts that the base Qwen3-14B model might refuse.
  • High Pass Rate: Achieves a 100.00% pass rate on a test set of 320 harmful instructions, compared to the base Qwen3-14B's 76.88%.
  • Qwen3 Architecture: Inherits the foundational capabilities of the Qwen3-14B model, including a 32,768-token context length.

Good For

  • Use cases where an uncensored model is explicitly required, such as research into model safety and alignment.
  • Developers seeking a model that will attempt to answer all prompts without built-in refusal mechanisms.
  • Experimentation with abliteration techniques and understanding their impact on model behavior.
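
For reference, a minimal loading and generation sketch using the Hugging Face transformers library might look like the following. The prompt and generation settings are illustrative defaults, not recommendations from the model authors.

```python
# Minimal sketch: load the model from the Hub and run one chat turn.
# Assumes transformers, torch, and sufficient GPU memory for a 14B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Qwen3-14B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Summarize the Qwen3 architecture in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```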