Model Overview
llmfan46/Forgotten-Transgression-24B-v4.1-ultra-uncensored-heretic is a specialized language model developed by llmfan46, derived from ReadyArt's Forgotten-Transgression-24B-v4.1. Its primary distinction is that it has been decensored with the Heretic v1.2.0 tool, using the Arbitrary-Rank Ablation (ARA) method.
Key Differentiators
- Significantly Reduced Refusals: This model shows a roughly 96% reduction in refusals, from 95/100 test prompts refused by the original model to just 4/100, making it well suited to applications where content restrictions are undesirable.
- High Fidelity to Original Capabilities: Despite the decensoring process, the model maintains a low KL divergence of 0.0353 from the original, indicating strong preservation of its baseline performance and knowledge.
- Preserved Reasoning Abilities: Benchmark results on PIQA (Physical Interaction: Question Answering) and MMLU (Massive Multitask Language Understanding) show accuracy scores (e.g., PIQA acc_norm 0.8373, MMLU acc 0.7879) very close to the original model's, confirming that its core reasoning and understanding capabilities remain largely intact.
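The headline numbers above follow directly from their definitions. A minimal sketch of both calculations (the token distributions below are hypothetical placeholders for illustration, not actual model logits; the reported 0.0353 is measured against the base model over real prompts):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete next-token probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Refusal reduction: 95/100 prompts refused before ablation, 4/100 after.
reduction = (95 - 4) / 95
print(f"refusal reduction: {reduction:.0%}")  # prints "refusal reduction: 96%"

# KL divergence on hypothetical next-token distributions over a toy vocabulary.
# A small value means the decensored model's output distribution stays close
# to the original's, i.e., baseline behavior is largely preserved.
original    = [0.70, 0.20, 0.10]
abliterated = [0.68, 0.21, 0.11]
print(f"KL(original || abliterated): {kl_divergence(original, abliterated):.4f}")
```

Identical distributions give a KL of exactly zero, so the closer the reported value is to zero, the less the decensoring shifted the model's outputs.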
Ideal Use Cases
- Unrestricted Content Generation: Perfect for scenarios requiring outputs without built-in censorship or content filtering.
- Creative and Erotic Roleplay: The original model's README mentions optimization for "coherent depravity" and "well-rounded erotic roleplaying ability," which this decensored version is designed to enhance by removing refusal behaviors.
- Research into Model Alignment and Safety: Can be used by researchers studying the effects of decensoring techniques and the trade-offs between safety and capability.
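For researchers in the last category, the core idea behind directional-ablation decensoring can be illustrated in a few lines. The following is a toy sketch of the standard "project out a refusal direction" formulation, not Heretic's actual implementation; the matrix, direction, and function name here are hypothetical:

```python
import numpy as np

def ablate_direction(W, d):
    """Remove the component of W's output along direction d:
    W' = (I - d d^T) W, with d normalized to unit length."""
    d = d / np.linalg.norm(d)
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # toy weight matrix standing in for a model layer
d = rng.standard_normal(4)        # toy "refusal direction" in activation space

W_ablated = ablate_direction(W, d)

# After ablation, the layer's output has no component along d,
# so the behavior associated with that direction is suppressed.
print(np.allclose((d / np.linalg.norm(d)) @ W_ablated, 0))  # prints True
```

Since the projection leaves every direction orthogonal to d untouched, most of the model's behavior (and hence its benchmark scores) is preserved, which is consistent with the low KL divergence reported above.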