BeaverLegacy/Moistral-11B-v1
Moistral-11B-v1 by BeaverLegacy is a 10.7 billion parameter instruction-tuned model, aggressively fine-tuned from the Fimbulvetr v2 model. It specializes in generating "moist" and aggressive erotic roleplay (eRP) content, offering a rich vocabulary for such narratives. It is optimized for long-form text generation and handles context lengths of up to 8K tokens, making it suitable for extended eRP scenarios.
Moistral-11B-v1 Overview
Moistral-11B-v1 is a 10.7 billion parameter language model developed by BeaverLegacy, specifically fine-tuned for generating erotic roleplay (eRP) content with an "aggressive" and "moist" vocabulary. It is based on the Fimbulvetr v2 model, which is known for performance that rivals larger models.
Key Capabilities
- Aggressive eRP Generation: Excels at producing explicit and suggestive narratives, designed to "turn any old story into a Moistral masterpiece."
- Instruction Mode: When used in instruct mode, the model is designed to act as a director for fantasy scenarios, allowing users to guide the narrative.
- Long Context Handling: Trained with numerous long-form texts, including many up to 8K tokens in length, enabling it to maintain coherence over extended interactions.
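Even with an 8K-token window, a long-running roleplay transcript will eventually overflow it, so a frontend typically trims the oldest turns before each generation. A minimal sketch of that trimming, assuming a caller-supplied token-counting helper (`trim_history`, `count_tokens`, and the default budget values are illustrative, not from the model card):

```python
def trim_history(messages, count_tokens, max_tokens=8192, reserve=512):
    """Drop the oldest messages until the prompt fits within the context
    window, keeping `reserve` tokens of headroom for the model's reply.

    messages:     list of message strings, oldest first
    count_tokens: callable returning the token count of one message
    """
    budget = max_tokens - reserve
    kept, total = [], 0
    # Walk newest-to-oldest so the most recent turns survive.
    for msg in reversed(messages):
        n = count_tokens(msg)
        if total + n > budget:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))
```

In practice `count_tokens` would wrap the model's own tokenizer (e.g. `len(tokenizer.encode(msg))`), since whitespace splitting underestimates real token counts.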
Use Cases and Recommendations
This model is primarily intended for erotic roleplay and creative writing where explicit and suggestive content is desired. The developer recommends letting Moistral "cook" by providing only minimal "moist" hints in the initial prompt. For best results, the developer suggests experimenting with specific generation parameters: temperature 0.66, repetition_penalty 1.1, top_p 0.64, and rp_slp 1 (likely the repetition-penalty slope exposed by Kobold-style frontends) to prevent underperformance or degenerate token output. The model is noted as a first attempt at fine-tuning, with planned improvements including dataset sanitization, theme balancing, and potential context window extensions.
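The suggested settings above map directly onto the sampling kwargs of common backends. A minimal sketch, assuming a Hugging Face-style `generate()` API; the helper name and default `max_new_tokens` are illustrative, and `rp_slp` is omitted because a repetition-penalty slope is not a standard `generate()` parameter:

```python
# Sampling values taken from the model card's recommendations;
# everything else in this sketch is an illustrative assumption.
SUGGESTED_SAMPLING = {
    "do_sample": True,
    "temperature": 0.66,
    "repetition_penalty": 1.1,
    "top_p": 0.64,
}

def build_generate_kwargs(max_new_tokens=512, **overrides):
    """Merge the card's suggested sampling settings with caller overrides."""
    kwargs = dict(SUGGESTED_SAMPLING, max_new_tokens=max_new_tokens)
    kwargs.update(overrides)
    return kwargs

# With transformers, usage would look like:
#   output = model.generate(**inputs, **build_generate_kwargs())
```

Keeping the recommended values in one dictionary makes it easy to A/B-test deviations (e.g. a higher temperature) without losing the baseline the developer suggests starting from.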