llmfan46/Forgotten-Transgression-24B-v4.1-ultra-uncensored-heretic

Hugging Face
Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Apr 1, 2026 · License: apache-2.0 · Architecture: Transformer

llmfan46/Forgotten-Transgression-24B-v4.1-ultra-uncensored-heretic is a decensored version of ReadyArt/Forgotten-Transgression-24B-v4.1, created by llmfan46 using the Heretic v1.2.0 tool with its Arbitrary-Rank Ablation (ARA) method. The model reduces content refusals by 96% (from 95/100 to 4/100) while preserving core model quality, with a KL divergence of 0.0353 from the original. It is specifically optimized for generating content with fewer restrictions, making it suitable for use cases requiring uncensored outputs.


Model Overview

llmfan46/Forgotten-Transgression-24B-v4.1-ultra-uncensored-heretic is a specialized language model developed by llmfan46, derived from ReadyArt's Forgotten-Transgression-24B-v4.1. Its primary distinction is its decensored nature, achieved by applying the Heretic v1.2.0 tool with the Arbitrary-Rank Ablation (ARA) method.
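Heretic's exact ARA implementation is not documented here, but the general idea behind directional ablation can be sketched: estimate one or more "refusal" directions in a layer's output space, then project that subspace out of the weight matrix so the model can no longer write along it. The helper name `ablate_directions` and the random data below are illustrative assumptions, not Heretic's API:

```python
import numpy as np

def ablate_directions(W, directions):
    """Remove the component of W's outputs along the given unit directions.

    W: (d_out, d_in) weight matrix.
    directions: (r, d_out) orthonormal rows spanning the subspace to
    ablate (e.g. estimated "refusal" directions). Returns
    W' = (I - V^T V) @ W, so outputs of W' carry no signal along V.
    """
    V = np.asarray(directions)        # (r, d_out)
    return W - V.T @ (V @ W)          # subtract the projection onto span(V)

# Toy demonstration with a single (rank-1) ablated direction:
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
v = rng.standard_normal(8)
v /= np.linalg.norm(v)                # unit-normalize the direction
W_ablated = ablate_directions(W, v[None, :])

# The ablated weights produce no output component along v:
print(np.allclose(v @ W_ablated, 0))  # True
```

"Arbitrary-rank" here refers to ablating an r-dimensional subspace (r rows in `directions`) rather than a single direction; the rank-1 case above is the simplest instance.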

Key Differentiators

  • Significantly Reduced Refusals: This model exhibits a 96% reduction in content refusals, dropping from 95/100 in the original model to just 4/100. This makes it highly suitable for applications where content restrictions are undesirable.
  • High Fidelity to Original Capabilities: Despite the decensoring process, the model maintains a low KL divergence of 0.0353 from the original, indicating strong preservation of its baseline performance and knowledge.
  • Preserved Reasoning Abilities: Benchmark results on PIQA (Physical Interaction: Question Answering) and MMLU (Massive Multitask Language Understanding) show that the model's accuracy scores (e.g., PIQA acc_norm 0.8373, MMLU acc 0.7879) are very close to the original model's, confirming that its core reasoning and understanding capabilities remain largely intact.
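The KL divergence figure above measures how far the decensored model's output distribution drifts from the original's (lower means closer). As an illustrative sketch only (not the evaluation Heretic actually runs), KL divergence between two hypothetical next-token distributions can be computed like this:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 4-token vocabulary;
# the real metric is averaged over many positions and a full vocabulary.
original   = [0.70, 0.20, 0.05, 0.05]
decensored = [0.68, 0.21, 0.06, 0.05]

print(round(kl_divergence(original, decensored), 4))
```

A value like the reported 0.0353 indicates the decensored model assigns token probabilities nearly identical to the original's on non-refused content.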

Ideal Use Cases

  • Unrestricted Content Generation: Perfect for scenarios requiring outputs without built-in censorship or content filtering.
  • Creative and Erotic Roleplay: The original model's README mentions optimization for "coherent depravity" and "well-rounded erotic roleplaying ability," which this decensored version is designed to enhance by removing refusal behaviors.
  • Research into Model Alignment and Safety: Can be used by researchers studying the effects of decensoring techniques and the trade-offs between safety and capability.