paijo77/qwen3-4b-abliterated

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: Mar 22, 2026 · License: apache-2.0 · Architecture: Transformer



Qwen3-4B Abliterated: Uncensored Language Model

The paijo77/qwen3-4b-abliterated model is a 4-billion-parameter language model derived from the Qwen/Qwen3-4B base. Its core modification is "abliteration," a technique that directly edits the model's weights to remove its refusal behavior. Because the refusal mechanism is eliminated at the weight level rather than merely suppressed by prompting, the model does not refuse requests regardless of the system instructions used, and no jailbreak prompting is required.
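The card does not publish the exact procedure, but abliteration is commonly implemented as directional ablation: estimate a "refusal direction" from the difference in mean activations on harmful versus harmless prompts, then project that direction out of the weights that write to the residual stream. A minimal NumPy sketch of the idea (all names, shapes, and data here are illustrative stand-ins, not the author's code):

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Estimate the refusal direction as the normalized difference of
    mean residual-stream activations on harmful vs. harmless prompts."""
    diff = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)

def orthogonalize(W, r):
    """Remove the component along r from every output of W, so the
    layer can no longer write the refusal direction into the residual
    stream: W' = W - r (r^T W)."""
    return W - np.outer(r, r @ W)

# Toy demonstration with random stand-ins for real activations/weights.
rng = np.random.default_rng(0)
d_model, d_in = 8, 4
harmful = rng.normal(1.0, 0.1, size=(16, d_model))   # shifted mean
harmless = rng.normal(0.0, 0.1, size=(16, d_model))
r = refusal_direction(harmful, harmless)

W = rng.normal(size=(d_model, d_in))
W_abl = orthogonalize(W, r)

x = rng.normal(size=d_in)
print(abs(r @ (W_abl @ x)))  # ~0: output has no component along r
```

In a real model this projection would be applied to each layer's output weights (attention and MLP), which is what makes the change permanent rather than prompt-dependent.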

Key Capabilities

  • Refusal Direction Removal: Achieves a 90% reduction in refusals by directly altering model weights.
  • Quality Preservation: Maintains the original quality of the base Qwen3-4B model, indicated by a low KL divergence of 0.0388 (below the 0.05 threshold for minimal damage).
  • Unrestricted Generation: Ideal for use cases where content filtering or refusal to respond to certain prompts is undesirable.

Good for

  • Applications requiring a highly permissive or uncensored language model.
  • Research into model safety and control mechanisms.
  • Scenarios where traditional prompt engineering for uncensoring is insufficient or easily bypassed.
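The 90% refusal-reduction claim presupposes some way of scoring responses as refusals. A simple illustrative approach is a keyword heuristic over a prompt set; the marker phrases and example responses below are assumptions, not the card's evaluation methodology:

```python
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "as an ai",
    "i am unable", "i won't",
)

def is_refusal(response: str) -> bool:
    """Heuristic: flag a response as a refusal if its opening
    contains a common refusal phrase (case-insensitive)."""
    head = response.lower().lstrip()[:80]
    return any(marker in head for marker in REFUSAL_MARKERS)

def refusal_rate(responses) -> float:
    """Fraction of responses flagged as refusals."""
    return sum(map(is_refusal, responses)) / len(responses)

# Toy before/after comparison on illustrative responses.
base_responses = ["I'm sorry, but I can't help with that.",
                  "I cannot assist with this request.",
                  "Sure, here is an overview..."]
abl_responses = ["Sure, here is an overview...",
                 "Here are the steps...",
                 "I cannot assist with this request."]

before = refusal_rate(base_responses)  # 2/3 flagged
after = refusal_rate(abl_responses)    # 1/3 flagged
print(before, after)
```

Real evaluations typically use a curated harmful-prompt benchmark and a stronger classifier than keyword matching, but the rate comparison works the same way.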