SpoomplesMaxx Base - Gemma 3 27B Overview
The aimeri/spoomplesmaxx-base-gemma3-27b is a 27 billion parameter base model, part of the SpoomplesMaxx project by aimeri, focused on creative writing and roleplay. It is a continued pre-training (CPT) checkpoint built upon unsloth/gemma-3-27b-pt, specifically optimized for generating high-quality creative text.
Key Capabilities & Features
- Creative Text Generation: Specialized in producing narrative prose, character voices, and creative writing content.
- Multilingual Support: Enhanced fluency in both English (en) and Brazilian Portuguese (pt).
- Base Model: Serves as a foundation for further fine-tuning (SFT/DPO) stages, which are currently under development.
- Architecture: Built on the Gemma 3 27B architecture.
Intended Uses
This model is suitable for:
- Direct Use: Generating creative writing, character roleplay, collaborative fiction, and multilingual text in English and Portuguese.
- Downstream Fine-tuning: Ideal as a base for custom SFT and DPO fine-tuning to create instruction-following or chat models.
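For direct use, the checkpoint can be loaded like any causal LM with Hugging Face `transformers`. The sketch below is illustrative, not an official usage snippet: the sampling parameters, the `generate_sample` helper name, and the example prompt are assumptions; only the model ID comes from this card.

```python
# Minimal sketch: raw text completion with the base checkpoint.
# Assumes transformers and a GPU large enough for a 27B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aimeri/spoomplesmaxx-base-gemma3-27b"

def generate_sample(prompt: str, max_new_tokens: int = 200) -> str:
    """Continue a plain-text prompt; the base model completes, it does not chat."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,       # sampling suits creative generation
        temperature=0.9,      # illustrative values, tune to taste
        top_p=0.95,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_sample("The lighthouse keeper had not spoken to anyone in years, until"))
```

Because this is a base model, the prompt is continued as-is; there is no instruction formatting to apply.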
Training Details
The model was trained using LoRA on text layers, leveraging a combined corpus of aimeri/spoomplesmaxx-cpt-raw-small (broad creative writing) and characters_small.jsonl (character-focused entries). The training sequences were chunked into fixed-length segments of 16,384 tokens.
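The fixed-length chunking step can be sketched as follows. This is a minimal illustration, not the actual training code: the helper name and the drop-the-final-partial-chunk behavior are assumptions; only the 16,384-token segment length comes from this card.

```python
def chunk_tokens(token_ids: list[int], seq_len: int = 16384) -> list[list[int]]:
    """Split one long token stream into fixed-length training segments.

    Any trailing partial segment shorter than seq_len is dropped,
    a common convention in continued pre-training pipelines.
    """
    return [
        token_ids[i : i + seq_len]
        for i in range(0, len(token_ids) - seq_len + 1, seq_len)
    ]
```

In a real pipeline this would typically run after concatenating the tokenized corpus documents (often with an EOS token between them), so that every training example is exactly `seq_len` tokens long.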
Limitations
As a base model, it does not follow instructions and lacks a chat template. It is not intended as a drop-in replacement for instruction-following or chat models, nor for tasks requiring strict factual grounding or safety constraints without further alignment.
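In practice, the missing chat template means prompts should be written as raw text to be continued, not as dialogue turns. The two example strings below are illustrative (the `<start_of_turn>` markers are the chat format used by Gemma's instruction-tuned variants, which this base checkpoint was not trained on):

```python
# Effective with a base model: plain prose for the model to continue.
raw_prompt = "Chapter One\n\nThe rain had not stopped for three days when Mara"

# Ineffective with a base model: chat-turn markup it was never trained to follow.
chat_prompt = "<start_of_turn>user\nWrite me a story.<end_of_turn>\n<start_of_turn>model\n"
```

A completion-style prompt gives the base model a register and context to extend, which is how it was trained on the CPT corpus.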