llmfan46/MS3.2-PaintedFantasy-v4.1-24B-ultra-uncensored-heretic-v1

Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Mar 22, 2026 · License: MIT · Architecture: Transformer · Open Weights

llmfan46/MS3.2-PaintedFantasy-v4.1-24B-ultra-uncensored-heretic-v1 is a 24B-parameter decensored version of zerofata/MS3.2-PaintedFantasy-v4.1-24B, created using the Heretic v1.2.0 Arbitrary-Rank Ablation (ARA) method. It sharply reduces refusals (1/100 versus the original's 80/100) while preserving output quality, with a KL divergence of 0.0060 from the original model. Like its base, it is optimized for creative, character-driven roleplay (RP) and erotic roleplay (ERP), with training that specifically targeted repetition in assistant messages.


Overview

llmfan46/MS3.2-PaintedFantasy-v4.1-24B-ultra-uncensored-heretic-v1 is a 24B-parameter decensored variant of zerofata/MS3.2-PaintedFantasy-v4.1-24B. It was created with the Heretic v1.2.0 Arbitrary-Rank Ablation (ARA) method, targeting the attn.o_proj components of layers 4 through 39.
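
Heretic's ARA internals are not documented here, but the underlying idea is directional ablation of a projection matrix. A minimal rank-1 sketch on an o_proj-style weight, assuming a precomputed refusal direction `d` (the function name and shapes are illustrative, not Heretic's API):

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Rank-1 ablation: remove the component of `weight`'s output space
    that lies along `direction` (e.g. a measured refusal direction).

    weight:    (d_out, d_in) matrix, e.g. a layer's attn.o_proj.weight
    direction: (d_out,) vector; normalized internally
    """
    d = direction / direction.norm()
    # W' = W - d (d^T W): outputs can no longer move along d
    return weight - torch.outer(d, d @ weight)

# hypothetical shapes for illustration
w = torch.randn(8, 16)
d = torch.randn(8)
w_ablated = ablate_direction(w, d)
```

After ablation, projecting the modified weight onto the ablated direction yields zero, so no input can push the layer's output along it.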

Key Differentiators & Performance

  • Decensored Output: Refusals drop to 1/100 from the original model's 80/100, indicating far fewer content restrictions.
  • Quality Preservation: A low KL divergence of 0.0060 from the original model indicates outputs stay close to the baseline distribution.
  • Repetition Reduction: The model's training process included heavy filtering and rewriting of repetitive assistant messages, aiming to improve creative output and reduce pattern-like responses.
  • Physical Reasoning: Demonstrates preserved physical reasoning capabilities, with PIQA acc_norm scores identical to the original model (0.8303).
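
A KL divergence figure like the one above can be sanity-checked with a generic comparison of the two models' logits on identical inputs. A minimal sketch (not Heretic's exact evaluation harness):

```python
import torch
import torch.nn.functional as F

def mean_kl(logits_orig: torch.Tensor, logits_new: torch.Tensor) -> float:
    """Mean per-token KL(original || decensored).

    logits_*: (num_tokens, vocab_size) logits from both models
    at the same token positions on identical inputs.
    """
    logp_orig = F.log_softmax(logits_orig, dim=-1)
    logp_new = F.log_softmax(logits_new, dim=-1)
    # kl_div(input=log Q, target=log P, log_target=True) computes KL(P || Q)
    return F.kl_div(logp_new, logp_orig, log_target=True,
                    reduction="batchmean").item()
```

Identical logits give a KL of zero; any divergence between the two models shows up as a positive value.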

Training & Optimization

The underlying model was created through an SFT > DPO pipeline. SFT used approximately 25 million tokens (17.5 million trainable) drawn from diverse datasets spanning SFW/NSFW RP, stories, and creative instruct data. The DPO stage added non-creative datasets, such as cybersecurity and general assistant/chat preference data, to stabilize the model, and used embedding similarity to remove DPO samples that encouraged repetition.
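
The embedding-based sample removal can be pictured as a similarity filter. A hedged sketch, where `embed` is a placeholder for any sentence-embedding model and `repetitive_refs` are hand-picked examples of the unwanted pattern (none of these names come from the card):

```python
import numpy as np

def filter_repetitive(samples, embed, repetitive_refs, threshold=0.9):
    """Keep only DPO samples whose chosen response is NOT embedding-similar
    to any known repetitive reference response.

    samples:         list of dicts with a "chosen" text field
    embed:           callable, text -> unit-normalized np.ndarray
                     (placeholder for a real sentence-embedding model)
    repetitive_refs: example responses exhibiting the unwanted repetition
    """
    ref_vecs = np.stack([embed(r) for r in repetitive_refs])  # (R, d)
    kept = []
    for s in samples:
        sims = ref_vecs @ embed(s["chosen"])  # cosine sims for unit vectors
        if sims.max() < threshold:
            kept.append(s)
    return kept
```

The threshold trades recall for precision: lower values prune more aggressively at the risk of discarding legitimate creative samples.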

Recommended Use

This model is intended primarily for creative, character-driven roleplay (RP) and erotic roleplay (ERP), offering a less censored experience with improved creative flow. Recommended SillyTavern formatting is plaintext for actions, quotes for dialogue, and asterisks for thoughts, with sampler settings of Temp 0.8, MinP 0.05-0.075, and TopP 0.95-1.00.
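
For intuition about what those sampler settings do, here is a minimal, self-contained sketch of the temperature → MinP → TopP chain over a 1-D logit vector (your inference backend implements this for you; the function is illustrative only):

```python
import torch

def sample_next_token(logits: torch.Tensor,
                      temperature: float = 0.8,
                      min_p: float = 0.05,
                      top_p: float = 0.95) -> int:
    """Apply temperature, then MinP, then TopP, then sample one token id."""
    probs = torch.softmax(logits / temperature, dim=-1)

    # MinP: drop tokens whose probability is below min_p * p(top token)
    probs = torch.where(probs < min_p * probs.max(),
                        torch.zeros_like(probs), probs)

    # TopP (nucleus): keep the smallest high-probability prefix covering top_p
    sorted_p, idx = probs.sort(descending=True)
    cum_before = sorted_p.cumsum(-1) - sorted_p  # mass strictly before each token
    sorted_p[cum_before > top_p] = 0.0
    probs = torch.zeros_like(probs).scatter(-1, idx, sorted_p)

    probs = probs / probs.sum()  # renormalize surviving tokens
    return torch.multinomial(probs, 1).item()
```

MinP scales the cutoff with the model's confidence: when one token dominates, the tail is pruned hard; when the distribution is flat, more candidates survive, which suits creative generation.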