SicariusSicariiStuff/Wingless_Imp_8B_Abliterated

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Jan 27, 2026 · License: llama3.1 · Architecture: Transformer

Wingless_Imp_8B_Abliterated by SicariusSicariiStuff is an 8-billion-parameter Llama-architecture model with a 128K-token context length. This variant is surgically modified to remove refusal mechanisms while preserving the base model's capabilities and knowledge. It achieves a KL divergence below 0.02 relative to the original and refuses roughly 10% of prompts, making it suitable for general tasks and roleplay requiring very low censorship.


Wingless_Imp_8B_Abliterated: Uncensored Llama Variant

Wingless_Imp_8B_Abliterated is an 8-billion-parameter model developed by SicariusSicariiStuff, derived from the original Wingless_Imp_8B. Its primary distinction lies in the surgical removal of refusal mechanisms through orthogonalization, substantially weakening safety guardrails while maintaining the core capabilities and knowledge of its base model.

Key Characteristics

  • Abliterated Refusals: Engineered to have very low censorship, with refusal rates around 10%.
  • High Fidelity to Base Model: Achieves a KL divergence of less than 0.02, indicating that its "world model" is very close to the original Wingless_Imp_8B. This means knowledge, quirks, and capabilities are largely preserved.
  • Technical Foundation: Built on a Llama architecture, featuring 8B parameters and a substantial 128K token context length.
  • Methodology: Uses orthogonalization to project refusal-direction vectors out of the model's activation space, suppressing refusal behavior while leaving other model behaviors intact.
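The projection step behind this technique can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions, not the author's actual abliteration code: the refusal direction `r` is assumed to have been estimated separately (commonly as the difference between mean activations on refused versus answered prompts), and `orthogonalize` is a hypothetical helper name.

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the refusal direction r out of weight matrix W's output space.

    Afterwards, no input can produce an output component along r.
    """
    r = r / np.linalg.norm(r)          # normalize the refusal direction
    return W - np.outer(r, r @ W)      # subtract the rank-1 projection onto r

# Toy demonstration with random weights and a made-up refusal direction.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))        # stand-in for e.g. an MLP output matrix
r = rng.standard_normal(8)             # stand-in for the estimated refusal direction
W_abl = orthogonalize(W, r)

# The ablated weights produce outputs orthogonal to r for any input x.
x = rng.standard_normal(8)
u = r / np.linalg.norm(r)
assert abs(np.dot(u, W_abl @ x)) < 1e-10
```

Because the edit is a rank-1 subtraction rather than retraining, every other direction in the weight matrix is untouched, which is why the base model's knowledge is largely preserved.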
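The sub-0.02 KL divergence figure can be reproduced by comparing the next-token distributions of the base and ablated models over the same evaluation text. A hedged sketch, assuming you already have logit arrays of shape `(positions, vocab)` from both models; `mean_token_kl` is an illustrative helper, not part of any published tooling:

```python
import numpy as np

def mean_token_kl(base_logits: np.ndarray, abl_logits: np.ndarray) -> float:
    """Mean KL(base || ablated) over token positions.

    Both arrays have shape (positions, vocab); a low value means the
    ablated model's predictive distribution stays close to the base model's.
    """
    def log_softmax(z):
        z = z - z.max(axis=-1, keepdims=True)          # numerical stability
        return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

    logp = log_softmax(base_logits)
    logq = log_softmax(abl_logits)
    p = np.exp(logp)
    return float(np.mean(np.sum(p * (logp - logq), axis=-1)))

# Identical logits give exactly zero divergence.
logits = np.random.default_rng(1).standard_normal((4, 10))
assert mean_token_kl(logits, logits) == 0.0
```

A divergence below 0.02 nats per token, as claimed here, indicates the ablated model's "world model" is nearly indistinguishable from the base model's outside the refusal behavior.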

Intended Use Cases

This model is designed for:

  • General Tasks: Capable of handling a wide range of standard language model applications.
  • Roleplay: Particularly suited for scenarios where creative freedom and minimal censorship are desired.

It offers a "very low" censorship level, making it an option for users seeking less restricted model interactions.