Moistral-11B-v2 by BeaverLegacy is a 10.7 billion parameter language model, fine-tuned from Fimbulvetr-11B-v2, specifically designed for erotic roleplay (eRP) with an emphasis on generating "moist" and descriptive long-form responses. It features a rebalanced dataset including diverse genres and perspectives, sanitized training data for cleaner output, and improved chat/instruct modes, making it suitable for interactive narrative generation in eRP contexts.
Moistral-11B-v2: An eRP Focused Language Model
Moistral-11B-v2, developed by BeaverLegacy, is a 10.7 billion parameter model fine-tuned from Fimbulvetr-11B-v2. This iteration focuses on erotic roleplay (eRP), generating rich, descriptive, long-form narratives within a 4096-token context window.
Key Enhancements & Features
- Expanded Training Data: Trained on a larger dataset of "moist" content curated specifically to elicit extended responses.
- Genre and Perspective Rebalancing: The training data now includes a wider array of genres (romance, fantasy, sci-fi) and better representation of male and female perspectives.
- Data Sanitization: The dataset was meticulously cleaned to remove special characters, overly long ellipses, author notes, and inconsistent quotation marks, aiming for higher quality output.
- Reduced GPTisms: Prioritizes human-written stories from past decades to minimize generic AI-like phrasing.
- Improved Formatting: Utilizes Alpaca formatting for better performance in Chat and Instruct modes (see the prompt sketch after this list).
- Long-Form Generation: Capable of generating lengthy and coherent narratives without frequent turn-taking.
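For reference, here is a minimal sketch of the standard Alpaca prompt layout the list above points to. The system line and the `build_prompt` helper are illustrative assumptions, not taken from the model card; check the card itself for the exact template the authors recommend.

```python
# Minimal sketch of the standard Alpaca prompt layout referenced above.
# The system line and helper name are illustrative; consult the model
# card for the exact template the authors recommend.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Continue the story from the tavern scene."))
```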
Use Cases & Differentiators
Moistral-11B-v2 is specifically tailored for interactive eRP scenarios and excels at transforming user prompts into detailed, "moist" narratives. Users can leverage its Instruct Mode to direct the story, acting as a "director of your own fantasy ride." For those seeking less explicit content, the model can be guided with "drier" input stories or by adjusting sampler settings (e.g., temperature 0.5, top-p 0.8 for more coherent, less "moist" output). GGUF quantizations are also available, including "Dried" versions for reduced "moistness."
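As a sketch of dialing the output down with the sampler settings mentioned above, the example below loads a GGUF quantization with llama-cpp-python. The model file name is hypothetical; the temperature 0.5 / top-p 0.8 values come from the note in this section.

```python
# Sketch: running a GGUF quantization with the conservative sampler
# settings suggested above (temperature 0.5, top-p 0.8). Assumes
# llama-cpp-python is installed; the model file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Moistral-11B-v2.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=4096,  # matches the model's stated context length
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nContinue the story from the tavern scene.\n\n"
    "### Response:\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.5,  # lower temperature for more coherent output
    top_p=0.8,        # tighter nucleus sampling, per the note above
)
print(output["choices"][0]["text"])
```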