SpoomplesMaxx 27B 4500 SFT: Creative Writing & Roleplay Specialist
The aimeri/spoomplesmaxx-27b-4500 model is a supervised fine-tuning (SFT) adapter for the 27-billion-parameter Gemma 3 27B architecture, built on the aimeri/spoomplesmaxx-base-gemma3-27b-4500 checkpoint. It is part of the larger SpoomplesMaxx project, a hobbyist ML research effort focused on enhancing creative writing and roleplay capabilities in open base models.
Key Capabilities
- System-prompt following: Adheres to instructions for character behavior and narrative structure.
- Persona consistency: Maintains persistent character traits and voices across multi-turn interactions.
- Structured narrative reasoning: Manages explicit scene, character, and continuity states for coherent storytelling.
- Multilingual creative writing: Supports generation in English and Brazilian Portuguese.
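If the adapter is published as a standard PEFT checkpoint, it can likely be loaded on top of the base model with transformers and peft. The snippet below is a minimal sketch under that assumption: the repo IDs come from this card, but the loading pattern itself is illustrative, not an official recipe.

```python
# Hypothetical loading sketch (not an official recipe from the card):
# the SFT adapter is attached to the base checkpoint via peft.
BASE_REPO = "aimeri/spoomplesmaxx-base-gemma3-27b-4500"
ADAPTER_REPO = "aimeri/spoomplesmaxx-27b-4500"


def load_model():
    """Load the base model and attach the SFT adapter.

    Heavy dependencies (torch, transformers, peft) are imported lazily
    so the sketch can be read without them installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_REPO)
    model = AutoModelForCausalLM.from_pretrained(
        BASE_REPO,
        torch_dtype=torch.bfloat16,  # 27B weights; bf16 halves memory vs fp32
        device_map="auto",
    )
    model = PeftModel.from_pretrained(model, ADAPTER_REPO)
    return tokenizer, model
```

Alternatively, if merged full weights are published, the adapter repo could be loaded directly with `AutoModelForCausalLM.from_pretrained`; check the repo files to see which applies.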
Training Details
This SFT adapter was trained using two primary datasets:
- spoomplesmaxx-olivia-sft: Approximately 78,000 rows of DanChat-2-formatted data focused on persona consistency and system-prompt following, featuring the "Olivia Costa" persona.
- spoomplesmaxx-rp-reasoning: Around 9,600 ShareGPT-formatted entries for structured roleplay reasoning, teaching explicit narrative state tracking.
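For context, ShareGPT-formatted data conventionally stores each sample as a list of `from`/`value` turns. The entry below is an invented illustration of that container format, not a row from spoomplesmaxx-rp-reasoning:

```python
# Invented example of the conventional ShareGPT container format;
# the conversation text is illustrative, not taken from the dataset.
example_entry = {
    "conversations": [
        {"from": "system", "value": "Track scene, character, and continuity state."},
        {"from": "human", "value": "The knight enters the torchlit hall."},
        {"from": "gpt", "value": "[Scene: great hall, night] Her boots ring on the flagstones..."},
    ]
}

roles = [turn["from"] for turn in example_entry["conversations"]]
# → ["system", "human", "gpt"]
```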
Intended Use Cases
This model is specifically designed for:
- Character roleplay and collaborative fiction.
- System-prompt-driven character following with multi-character mode-switching.
- Structured narrative generation with scene and continuity state tracking.
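A persona-driven request for these use cases might be assembled as a standard chat message list and rendered with the tokenizer's chat template. This is an illustrative sketch: the persona text is invented, and whether the Gemma 3 template accepts a `system` role directly depends on the tokenizer.

```python
# Illustrative only: the persona text is invented, and system-role
# handling in the Gemma 3 chat template depends on the tokenizer.
def build_messages(persona: str, user_turn: str) -> list[dict]:
    """Assemble a chat-format message list with a persona system prompt."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_turn},
    ]


messages = build_messages(
    "You are Olivia Costa. Stay in character and keep scene continuity.",
    "We meet at a cafe in São Paulo. Você fala português?",
)
# The list can then be rendered with
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```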
It is not intended for general-purpose instruction following, factual Q&A, or safety-critical applications: the model is optimized for creative writing and roleplay, and the underlying continued-pretraining (CPT) base model is uncensored. Future plans include a DPO (Direct Preference Optimization) alignment stage for further refinement.