zerofata/MS3.2-PaintedFantasy-v2-24B

Hugging Face
TEXT GENERATION · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Ctx Length: 32k · Published: Jul 27, 2025 · License: apache-2.0 · Architecture: Transformer

zerofata/MS3.2-PaintedFantasy-v2-24B is a 24-billion-parameter uncensored creative language model fine-tuned for character-driven roleplay (RP) and erotic roleplay (ERP). Developed by zerofata, this model focuses on reducing conversational repetition and improving instruction following while retaining a distinctive, creative writing style. It is based on Mistral-Small-3.2-AntiRep-24B and supports a 32,768-token context length.


Model Overview

zerofata/MS3.2-PaintedFantasy-v2-24B is a 24-billion-parameter uncensored creative model designed specifically for character-driven roleplay (RP) and erotic roleplay (ERP). This second version improves on its predecessor by reducing conversational repetition and enhancing instruction following, aiming for more dynamic and engaging interactions.

Key Capabilities & Features

  • Creative Writing Style: Exhibits a distinct, creative writing style that keeps interactions fresh.
  • Reduced Repetition: Version 2 places a heavy emphasis on minimizing repetitive phrases and dialogue across conversations.
  • Improved Instruction Following: Enhanced ability to adhere to user instructions, leading to more consistent roleplay scenarios.
  • Uncensored Content: Intended for uncensored creative applications, particularly in RP/ERP contexts.

Training Process

The model underwent a multi-stage training process:

  • Supervised Fine-Tuning (SFT): Initial training with RP/ERP, stories, and in-character assistant data.
  • Direct Preference Optimization (DPO): Focused on reducing repetition, correcting misgendered characters, and eliminating "slop" (unwanted or low-quality text).
  • Kahneman-Tversky Optimization (KTO): Further refined the model to reduce repetition and slop, building on DPO improvements.
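The DPO stage above trains on chosen/rejected completion pairs, pushing the policy to prefer the chosen response relative to a frozen reference model. As a hedged illustration of the underlying objective (not zerofata's actual training code, which is not published here), the per-pair DPO loss can be computed from policy and reference log-probabilities like this:

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for one preference pair.

    The loss shrinks as the policy widens the gap between the chosen
    and rejected completions relative to the reference model.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): large when the policy has no preference,
    # small when the chosen completion is clearly favored
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# No preference yet: loss is log(2) ≈ 0.693
baseline = dpo_loss(-10.0, -10.0, -10.0, -10.0)
# Policy strongly prefers the chosen response: loss drops
trained = dpo_loss(-5.0, -20.0, -10.0, -10.0)
```

KTO works from the same reward-margin idea but uses unpaired "desirable/undesirable" labels instead of explicit pairs, which is why it can build on DPO's improvements with cheaper-to-collect data.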

Recommended Usage

For optimal performance in roleplay scenarios, specific SillyTavern settings are suggested:

  • Roleplay Format: Actions in plaintext, dialogue in quotes, thoughts in asterisks.
  • Sampler Settings: Recommended temperature of 0.5-0.6, MinP 0.1, TopP 0.95, and DRY with multiplier 0.8, base 1.75, allowed length 4.
  • Instruct Template: Utilizes the Mistral v7 Tekken instruct format.
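The MinP setting above filters the vocabulary relative to the single most likely token: any token whose probability falls below `min_p` times the top token's probability is discarded before sampling. A minimal sketch of that filtering step (an illustration of the technique, not SillyTavern's or any backend's actual implementation):

```python
import math

def min_p_filter(logits: list[float], min_p: float = 0.1) -> list[int]:
    """Return indices of tokens that survive min-p filtering.

    A token survives if its probability is at least min_p times the
    probability of the most likely token.
    """
    m = max(logits)
    probs = [math.exp(l - m) for l in logits]   # stable softmax, unnormalized
    total = sum(probs)
    probs = [p / total for p in probs]
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# With min_p = 0.1, tokens below 10% of the top token's probability are dropped
keep = min_p_filter([5.0, 4.5, 2.0, 0.0], min_p=0.1)
```

In practice temperature is applied to the logits before this cutoff, which is why the low recommended temperature (0.5-0.6) pairs well with a modest MinP: the distribution is already sharpened, and MinP trims the remaining low-probability tail.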

While offering a unique creative flair, the model may occasionally exhibit "brain farts" or inconsistencies due to its specialized nature.
