IceMoonshineRP-7b: Roleplay Optimized Mistral v0.2 Merge
IceMoonshineRP-7b is a 7-billion-parameter language model developed by icefog72, built on the Mistral v0.2 base architecture. It merges several pre-trained roleplay-focused models using mergekit to strengthen its capabilities in interactive storytelling and character interaction.
Key Capabilities & Features
- Roleplay Specialization: Merged and tuned specifically for generating high-quality, consistent roleplay narratives and character responses.
- Context Handling: The model supports a 32k-token context window, but keeping prompts at or below 21k tokens is recommended; output quality degrades noticeably beyond that point.
- SillyTavern Integration: Provides explicit guidance and recommended settings, including rules and formatting presets, for seamless integration and superior performance with SillyTavern.
- Planning Mechanism: Supports an optional "planning" feature for NPCs, steering their behavior through explicit plans, which is more controlled and less error-prone than free-form reasoning.
- Flexible Formatting: Emphasizes the importance of clean prompt formatting for smaller models, offering advice on structuring prompts and character cards for best results.
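The 21k-token recommendation above can be enforced client-side by trimming the oldest messages before each request. A minimal sketch, assuming a simple chars-per-token heuristic (the function names and the 4-characters-per-token estimate are illustrative, not part of the model's tooling; swap in the model's real tokenizer for exact counts):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Replace with the model's actual tokenizer for exact counts.
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = 21_000) -> list[str]:
    """Drop the oldest messages until the estimated total fits the budget."""
    kept: list[str] = []
    total = 0
    # Walk newest-to-oldest so the most recent context survives.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Frontends like SillyTavern apply a similar sliding-window truncation automatically once the context limit is set, so this is mainly useful for custom clients.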
Recommended Use Cases
- Interactive Roleplay: Ideal for users seeking a dedicated model for engaging in detailed and immersive text-based roleplay scenarios.
- SillyTavern Users: Particularly well-suited for users of SillyTavern looking for a model with tailored settings and performance optimizations.
- Character Consistency: Excels at maintaining character voice and narrative flow within roleplay contexts.
Quantized versions (EXL2 and GGUF) are available for various hardware configurations, with specific instructions provided for running the model efficiently using tools like KoboldCpp.
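A typical way to serve a GGUF quant locally with KoboldCpp looks like the following (the file name is illustrative and exact flags vary by KoboldCpp version; check `--help` for your build):

```shell
# Launch KoboldCpp with a GGUF quant, a context window matching the
# recommended 21k-token working limit, and GPU offload of model layers.
python koboldcpp.py \
  --model IceMoonshineRP-7b.Q5_K_M.gguf \
  --contextsize 24576 \
  --gpulayers 33 \
  --port 5001
```

SillyTavern can then connect to the resulting local API endpoint as its backend.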