DavidAU/L3-Dark-Planet-8B-HERETIC-Uncensored-Abliterated

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Dec 18, 2025 · Architecture: Transformer

DavidAU/L3-Dark-Planet-8B-HERETIC-Uncensored-Abliterated is an 8-billion-parameter language model, based on "Dark Planet 8B", that has been processed with the Heretic v1.0.1 method to significantly reduce refusal rates. The model achieves a refusal rate of 13/100 with a KL divergence of 0.0716 from the base model, indicating minimal deviation from the original model's behavior while removing most built-in censorship. It is intended for use cases that require uncensored or explicit content generation, free of typical model refusals.


L3-Dark-Planet-8B-HERETIC-Uncensored-Abliterated: Abliterated for Freedom

This model, L3-Dark-Planet-8B-HERETIC-Uncensored-Abliterated, is an 8 billion parameter variant of the "Dark Planet 8B" model, specifically processed using the Heretic v1.0.1 method. The primary goal of this processing is to de-censor the model, drastically reducing its refusal rate while preserving its original performance characteristics.

Key Capabilities & Differentiators

  • Significantly Reduced Refusal Rate: The model boasts a refusal rate of 13/100, a substantial improvement from the original model's 90/100, enabling it to generate content that other models might refuse.
  • Preserved Model Integrity: With a KL divergence of 0.0716, the Heretic method ensures that the model's core functionality and "root state" are largely undamaged, maintaining performance close to its pre-abliteration state.
  • Uncensored Content Generation: Designed to generate a wider range of content, including explicit or sensitive topics, without inherent refusals. Users may need to provide specific directives or "push" the model with explicit terms to achieve the desired level of graphic or explicit output.

Optimal Usage & Settings

To maximize performance and achieve smoother operation, especially for chat and roleplay, users are advised to adjust specific settings in their inference interfaces:

  • Smoothing Factor: Set Smoothing_factor to 1.5 in KoboldCpp, oobabooga/text-generation-webui, or Silly Tavern.
  • Repetition Penalty: Alternatively, if not using the smoothing factor, increase the repetition penalty to 1.1–1.15.
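For readers unfamiliar with what a repetition penalty of 1.1–1.15 actually does to the sampler, here is a minimal sketch of the standard CTRL-style penalty that most of these frontends implement: logits of tokens already present in the output are divided by the penalty when positive and multiplied when negative, making repeats less likely. The function name and plain-list representation are illustrative, not taken from any of the tools above.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    """Penalize logits of tokens that already appear in the generated output.

    Positive logits are divided by the penalty and negative logits are
    multiplied by it, so previously emitted tokens lose probability mass.
    """
    out = list(logits)  # copy; one logit per vocabulary token
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out


# Example: tokens 0 and 1 were already generated, token 2 was not.
penalized = apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=1.1)
```

A penalty of 1.0 is a no-op; values in the suggested 1.1–1.15 range apply a mild push away from repeats without noticeably distorting the distribution.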

For detailed guidance on advanced settings, samplers, and parameters to optimize generation quality, refer to the provided documentation on "Maximizing Model Performance" by DavidAU.
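The settings above can be passed programmatically when using KoboldCpp's HTTP generation endpoint. The sketch below is a sample request payload assuming KoboldCpp's `/api/v1/generate` field names (`rep_pen`, `smoothing_factor`); check your KoboldCpp version's API docs, as parameter names and availability may differ, and use only one of the two samplers as recommended above.

```json
{
  "prompt": "Your prompt here",
  "max_length": 512,
  "temperature": 0.8,
  "smoothing_factor": 1.5,
  "rep_pen": 1.0
}
```

To use the repetition-penalty alternative instead, drop `smoothing_factor` (or set it to 0) and set `rep_pen` to a value in the 1.1–1.15 range.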