llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic
llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic is a decensored version of ReadyArt/Forgotten-Transgression-24B-v4.1, created by llmfan46 using the Heretic v1.2.0 tool with the Arbitrary-Rank Ablation (ARA) method. It reduces content refusals by 94% (6/100 vs 95/100) while maintaining model quality, with a low KL divergence of 0.0232 from the original. This makes it suitable for use cases that require fewer content restrictions.
Model Overview
llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic is a decensored variant of the original ReadyArt/Forgotten-Transgression-24B-v4.1 model. This version was created by llmfan46 using the Heretic v1.2.0 tool, specifically employing the Arbitrary-Rank Ablation (ARA) method to modify the model's behavior.
Key Differentiators & Performance
The primary goal of this model is to drastically reduce content refusals. It achieves a 94% reduction in refusals (6/100 compared to 95/100 for the original model), indicating a significant decrease in content restrictions. Crucially, this decensoring process maintains the original model's quality, as evidenced by a very low KL divergence of 0.0232.
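The headline numbers can be sanity-checked with a few lines of Python. The refusal counts and the 0.0232 KL figure come from the model card; the toy next-token distributions below are purely illustrative, not taken from the model:

```python
import math

# Refusal counts reported on the model card (refusals out of 100 prompts).
refusals_original = 95
refusals_heretic = 6

reduction = (refusals_original - refusals_heretic) / refusals_original
print(f"Relative refusal reduction: {reduction:.1%}")  # rounds to 94%

# KL divergence D(P || Q) measures how far the decensored model's output
# distribution Q drifts from the original P; the reported 0.0232 nats is
# very small, i.e. the two models emit nearly the same token distributions.
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy 3-token distributions: a small perturbation gives a small KL.
p = [0.70, 0.20, 0.10]
q = [0.65, 0.23, 0.12]
print(f"Toy KL: {kl_divergence(p, q):.4f}")
```

Note that KL divergence is asymmetric and zero only when the two distributions are identical, which is why it is used here as a "quality drift" measure against the original model.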
Benchmark results show that the model preserves its capabilities:
- PIQA (Physical Interaction: Question Answering): The model's `acc` and `acc_norm` scores are nearly identical to the original, demonstrating preserved common-sense reasoning.
- MMLU (Massive Multitask Language Understanding): Overall MMLU accuracy remains very close to the original model (0.7879 vs 0.7896), with minor fluctuations across individual subjects, confirming that the decensoring process did not substantially degrade general knowledge or reasoning abilities.
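As a quick arithmetic check, the reported MMLU gap is tiny in both absolute and relative terms:

```python
# MMLU accuracies reported above.
mmlu_original = 0.7896
mmlu_heretic = 0.7879

delta = mmlu_original - mmlu_heretic
print(f"Absolute drop: {delta:.4f} ({delta / mmlu_original:.2%} relative)")
```

A drop of 0.0017 accuracy (well under a quarter of a percent relative) is within typical run-to-run noise for MMLU-style evaluations.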
Use Cases
This model is ideal for applications where the goal is to generate content with minimal censorship or refusal behavior, while still retaining the underlying capabilities of the base model. Users seeking a less restrictive language model for various generative tasks may find this model particularly suitable.
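A minimal loading sketch, assuming the standard Hugging Face `transformers` API (`AutoModelForCausalLM` / `AutoTokenizer`); the dtype and device settings are illustrative and should be adjusted to your hardware, since a 24B model needs substantial GPU memory even in bfloat16:

```python
MODEL_ID = "llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic"

def load_model():
    # Imports kept inside the function so the sketch can be read (and the
    # constants used) without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # halves memory vs float32
        device_map="auto",           # spread layers across available devices
    )
    return tokenizer, model

def generate(prompt, tokenizer, model, max_new_tokens=256):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Usage would be `tokenizer, model = load_model()` followed by `generate("...", tokenizer, model)`; quantized loading (e.g. 4-bit via bitsandbytes) is an option if memory is tight.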