zerofata/MS3.2-PaintedFantasy-24B

Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Jun 24, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

zerofata/MS3.2-PaintedFantasy-24B is an experimental 24 billion parameter model based on Mistral Small 3.2, with a 32,768-token context length, fine-tuned specifically for character-driven roleplay (RP) and erotic roleplay (ERP). This uncensored model is designed to produce longer, narrative-heavy responses and to portray characters accurately and proactively. Its training process includes a small pretraining phase on light novels and Frieren wiki data, followed by SFT and two stages of DPO to improve consistency and reduce 'Mistral-isms'.


Overview

zerofata/MS3.2-PaintedFantasy-24B is an experimental, uncensored 24 billion parameter model built on the Mistral Small 3.2 architecture, featuring a 32768 token context length. It is specifically designed to excel in character-driven roleplay (RP) and erotic roleplay (ERP), focusing on generating longer, narrative-rich responses with accurate and proactive character portrayals.

Key Capabilities & Training

  • Character-Driven RP/ERP: Optimized for detailed and engaging roleplay scenarios.
  • Narrative Depth: Produces extended, story-rich outputs.
  • Proactive Character Portrayal: Characters are designed to act and react authentically within the narrative.
  • Unique Pretraining: Includes a small pretraining phase on light novels and Frieren wiki data, which has been shown to improve lore retention.
  • Multi-stage Fine-tuning: Underwent Supervised Fine-Tuning (SFT) on approximately 3.6 million tokens, comprising roughly 700 RP conversations, 1,000 creative writing/instruct samples, and 100 summaries. This was followed by two stages of Direct Preference Optimization (DPO) to improve instruction following and mitigate common 'Mistral-isms'.

Recommended Usage

This model is particularly suited to highly creative, character-focused, narrative-intensive text generation, especially in roleplaying contexts. For frontends such as SillyTavern, the recommended formatting is actions in plaintext, dialogue "in quotes", and thoughts *in asterisks*, with sampler settings of Temperature 0.8, MinP 0.04-0.05, and TopP 0.95-1.0.
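The recommended sampler settings above can be sketched as a request payload for an OpenAI-compatible chat endpoint. This is a minimal illustration, not an official client: the `build_payload` helper and `max_tokens` value are assumptions, and `min_p` is a backend extension (offered by engines such as vLLM), not part of the core OpenAI API.

```python
# Sketch: the card's recommended sampler settings as a chat-completion
# payload. build_payload is a hypothetical helper; "min_p" is a backend
# extension field, so check that your serving stack supports it.

def build_payload(messages, temperature=0.8, min_p=0.05, top_p=1.0):
    """Assemble a chat-completion request using the recommended samplers."""
    # Recommended ranges from the card: Temperature 0.8,
    # MinP 0.04-0.05, TopP 0.95-1.0.
    assert 0.04 <= min_p <= 0.05, "card recommends MinP in 0.04-0.05"
    assert 0.95 <= top_p <= 1.0, "card recommends TopP in 0.95-1.0"
    return {
        "model": "zerofata/MS3.2-PaintedFantasy-24B",
        "messages": messages,
        "temperature": temperature,
        "min_p": min_p,     # extension field, not core OpenAI API
        "top_p": top_p,
        "max_tokens": 512,  # arbitrary example value
    }

payload = build_payload([
    # Formatting convention from the card: actions in plaintext,
    # "dialogue in quotes", thoughts *in asterisks*.
    {"role": "user",
     "content": '*Who could this be?* "Hello there," she says, stepping forward.'},
])
print(payload["temperature"])  # 0.8
```

The assertions keep the example honest about the recommended ranges; a real integration would send this dict as the JSON body of a POST to the provider's chat-completions route.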

Popular Sampler Settings

Featherless tracks the three most popular parameter combinations its users apply to this model. The tunable sampler parameters are:

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p