zerofata/MS3.2-PaintedFantasy-v3-24B

24B parameters · FP8 · 32768 context · License: apache-2.0

Overview

zerofata/MS3.2-PaintedFantasy-v3-24B is an uncensored 24-billion-parameter language model built on the Magistral-Small-2509 base. This v3 iteration substantially reworks the training dataset and training parameters, with several experimental additions, aiming for a distinct improvement in creative generation.

Key Capabilities & Training

  • Uncensored Creative Generation: Designed to excel in character-driven roleplay (RP) and erotic roleplay (ERP) scenarios.
  • Expanded Dataset: The supervised fine-tuning (SFT) dataset has been significantly expanded to 31 million tokens (25 million trainable). Training uses rsLoRA (rank-stabilized LoRA) across all modules, including lm_head and embed_tokens (see the configuration sketch after this list).
  • Diverse SFT Data: The SFT dataset includes RP/ERP, stories, in-character assistant data, anime & VTuber AMAs, and modified NSFW writing prompts.
  • DPO Refinement: Direct Preference Optimization (DPO) was applied to reduce repetition, misgendering, parroting, and general logic issues. Chosen responses were high-quality ERP/RP; rejected responses were intentionally flawed outputs from MS3.2 (a preference-pair sketch follows below).
  • Optimized for Roleplay: Recommended SillyTavern samplers are Temp 0.7-0.8, MinP 0.075, and TopP 0.95-1.00, with a formatting convention of plaintext actions, quoted dialogue, and asterisked thoughts (a generation example closes this card).
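
The rsLoRA setup described above might look roughly like the following PEFT configuration. This is a minimal sketch, not the author's actual recipe: the rank, alpha, and exact target-module list are assumptions; only the use of rsLoRA with fully trained lm_head and embed_tokens comes from the card.

```python
from peft import LoraConfig

# Sketch of an rsLoRA config; r and lora_alpha are illustrative guesses.
lora_config = LoraConfig(
    r=128,
    lora_alpha=16,
    use_rslora=True,  # rank-stabilized scaling: alpha / sqrt(r) instead of alpha / r
    target_modules=[  # assumed: all attention and MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    # Per the card, lm_head and embed_tokens are trained as well;
    # modules_to_save trains them fully rather than via adapters.
    modules_to_save=["lm_head", "embed_tokens"],
    task_type="CAUSAL_LM",
)
```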

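The DPO stage pairs a preferred response with a rejected one for the same prompt. A minimal sketch using TRL follows; the checkpoint path, placeholder pairs, and beta value are assumptions, not the author's settings.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Hypothetical preference pairs mirroring the card's description:
# chosen = high-quality RP/ERP, rejected = flawed MS3.2 output.
pairs = Dataset.from_dict({
    "prompt":   ["<roleplay prompt>"],
    "chosen":   ["<high-quality RP/ERP response>"],
    "rejected": ["<repetitive or parroting MS3.2 output>"],
})

model = AutoModelForCausalLM.from_pretrained("path/to/sft-checkpoint")  # placeholder
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-checkpoint")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),  # beta is an assumed value
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```
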
Use Cases

  • Character-Driven Roleplay: Ideal for generating engaging and consistent character interactions.
  • Erotic Roleplay (ERP): Specifically fine-tuned for detailed and nuanced ERP scenarios.
  • Creative Writing: Suitable for generating stories and creative content, particularly those requiring dynamic character interactions.
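
To apply the recommended samplers outside SillyTavern, a minimal transformers generation sketch is shown below. The prompt is illustrative, and min_p support requires a reasonably recent transformers release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zerofata/MS3.2-PaintedFantasy-v3-24B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Plaintext actions, quoted dialogue, asterisked thoughts, per the card's format.
messages = [{"role": "user", "content": 'She sets down her brush. "What should we paint today?" *I hope they pick the castle.*'}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.75,  # recommended 0.7-0.8
    min_p=0.075,       # recommended MinP
    top_p=0.95,        # recommended 0.95-1.00
)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```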