DavidAU/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill-Heretic-Abliterated

Hosted on Hugging Face

Text generation · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Dec 9, 2025 · Architecture: Transformer

DavidAU/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill-Heretic-Abliterated is a 4 billion parameter instruction-tuned Qwen3 model, developed by DavidAU, featuring a 40960 token context length. This model has been specifically processed using the Heretic method to significantly reduce its refusal rate to 8/100, down from an original 87/100, while maintaining a low KL divergence of 0.06 to preserve its original performance. It is optimized for uncensored content generation across all use cases, providing direct and honest answers without typical LLM refusals.


Model Overview

DavidAU/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill-Heretic-Abliterated is a 4 billion parameter instruction-tuned model based on the Qwen3 architecture, featuring an extended context length of 40960 tokens. Its primary differentiator is the application of the "Heretic" method (v1.0.1) to achieve a significantly reduced refusal rate.
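The card does not prescribe loading code; a minimal sketch using the standard Hugging Face `transformers` chat API (function name and generation parameters here are illustrative assumptions):

```python
# Minimal loading/generation sketch. Assumes the standard transformers API;
# the model card itself does not specify loading code.
MODEL_ID = "DavidAU/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill-Heretic-Abliterated"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Lazily load the model and return a completion for `prompt`."""
    # Heavy imports are kept inside the function so the sketch can be
    # inspected without transformers/torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # BF16, matching the listed quant
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Deferring the imports also makes it easy to swap in a GGUF/llama.cpp loader instead, since the repo id is the only hard dependency on this sketch.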

Key Characteristics

  • Abliterated/Uncensored: The model's refusal rate has been drastically lowered from 87/100 to 8/100, aiming for unrestricted content generation.
  • Performance Preservation: A low KL divergence of 0.06 indicates that the de-censoring process has not significantly degraded the model's root performance.
  • Context Length: Supports a substantial 40960 token context, enabling processing of longer inputs and generating more extensive outputs.
  • Freedom-Oriented: Designed to answer honestly and without judgment across all use cases, including those typically subject to censorship.
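The KL divergence figure above quantifies how far the abliterated model's output distribution drifted from the original's. As a toy illustration of what such a number means (the distributions below are hypothetical, not the actual Heretic measurement):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i), in nats.
    Measures how much distribution Q diverges from reference P."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities, for illustration only.
original    = [0.70, 0.20, 0.10]
abliterated = [0.65, 0.24, 0.11]

shift = kl_divergence(original, abliterated)
# A value near 0 (like the reported 0.06) means the de-censored model's
# outputs are nearly indistinguishable from the original's.
```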

Usage Considerations

Although the model is designed for uncensored output, its defaults remain relatively "tame" even after refusal removal: it may need explicit directives or "pushing" with specific terms (e.g., slang, graphic descriptions) to reach the expected level of explicitness, especially for x-rated or highly descriptive content. Output can also be tuned via sampler settings, such as a Smoothing_factor of 1.5, in interfaces like KoboldCpp or oobabooga/text-generation-webui.
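For KoboldCpp users, the suggested smoothing factor is passed as a generation parameter. A sketch of a request payload for KoboldCpp's `/api/v1/generate` endpoint (field names follow KoboldCpp's API as I understand it; verify against your installed version's docs, and the prompt/temperature values are arbitrary):

```python
import json

# Assumed KoboldCpp /api/v1/generate payload applying the card's
# suggested Smoothing_factor of 1.5.
payload = {
    "prompt": "Write a short, vivid scene.",
    "max_length": 300,
    "temperature": 0.8,
    "smoothing_factor": 1.5,  # value suggested in the card
}

body = json.dumps(payload)
# To actually send it, POST `body` to a running KoboldCpp server, e.g.
# http://localhost:5001/api/v1/generate (default port; not done here).
```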