zerofata/MS3.2-PaintedFantasy-v4.1-24B
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Feb 16, 2026 · License: MIT · Architecture: Transformer · Open weights
zerofata/MS3.2-PaintedFantasy-v4.1-24B is a 24-billion-parameter uncensored language model fine-tuned for creative, character-driven roleplay (RP) and erotic roleplay (ERP), with a 32768-token context length. Based on Magistral Small 2509, it was developed by zerofata with a focus on reducing repetition in assistant messages through heavy dataset filtering and rewriting. Training combined Supervised Fine-Tuning (SFT) with Direct Preference Optimization (DPO) over diverse datasets, including SFW/NSFW RP, stories, and creative instruct data, making the model particularly adept at generating dynamic and varied narrative content.
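For illustration, a minimal sketch of a request payload for an OpenAI-compatible chat completions endpoint serving this model. The endpoint, system prompt, and sampling values here are assumptions for the example, not recommendations from the model card; only the model identifier and the 32768-token context window come from the listing above.

```python
import json

# Hypothetical request body for an OpenAI-compatible server hosting the
# model; adjust the system prompt and sampling parameters to taste.
payload = {
    "model": "zerofata/MS3.2-PaintedFantasy-v4.1-24B",
    "messages": [
        {"role": "system", "content": "You are a character in a fantasy roleplay."},
        {"role": "user", "content": "Introduce yourself at the tavern door."},
    ],
    # Prompt plus completion must fit inside the 32768-token context window.
    "max_tokens": 1024,
    "temperature": 0.8,
}
body = json.dumps(payload)
```

POSTing `body` to the server's `/v1/chat/completions` route (with an appropriate API key, if required) would return the model's in-character reply.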