Undi95/Mistral-ClaudeLimaRP-v3-7B Overview
This model, developed by Undi95, is a 7B-parameter variant built on the Mistral architecture. It merges Norquinal's Mistral-7B-claude-chat with lemonilia's LimaRP-Mistral-7B-v0.1 LoRA, applied at a weight of 0.75. The primary goal of the merge is to strengthen roleplaying capability while providing granular control over response generation.
Key Capabilities and Features
- Advanced Roleplaying: Designed for engaging in multi-turn, character-driven chat scenarios, utilizing an extended Alpaca prompt format.
- Customizable Response Lengths: A unique feature allowing users to specify a desired message length (tiny, short, medium, long, huge, humongous, extreme, or unlimited) directly within the prompt. This gives fine-grained control over the verbosity of the model's output; medium is the recommended starting point.
- Reproducible Length Control: The length modifier consistently influences response length, as demonstrated by testing data, ensuring predictable output behavior.
- Optimized Prompt Format: Employs a structured prompt format for defining character personas, user personas, and scenarios, facilitating coherent and context-aware roleplay.
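The features above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not the card's verbatim template: the exact section wording (persona lines, instruction text, `(length = ...)` suffix) is an assumption based on the extended Alpaca / LimaRP format described here, so check it against the model card's own example before use.

```python
# Hypothetical sketch of an extended Alpaca roleplay prompt with a
# length modifier; the exact template wording is an assumption.

LENGTHS = {"tiny", "short", "medium", "long", "huge",
           "humongous", "extreme", "unlimited"}

def build_prompt(character: str, user: str, scenario: str,
                 history: list[str], length: str = "medium") -> str:
    """Assemble one model turn: personas, scenario, chat log, length tag."""
    if length not in LENGTHS:
        raise ValueError(f"unknown length modifier: {length!r}")
    persona_block = (
        f"{character}'s Persona: ...\n"   # character description goes here
        f"{user}'s Persona: ...\n"        # user description goes here
        f"Scenario: {scenario}\n"
    )
    chat = "\n".join(history)
    return (
        "### Instruction:\n"
        f"{persona_block}\n"
        f"Play the role of {character} in this roleplay.\n\n"
        "### Input:\n"
        f"{chat}\n\n"
        f"### Response: (length = {length})\n"
        f"{character}:"
    )

prompt = build_prompt("Assistant", "User", "A quiet tavern at dusk",
                      ["User: Hello there."])
```

Swapping the `length` argument (e.g. `long` instead of `medium`) is the only change needed to steer verbosity between turns.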
Recommended Usage
This model is particularly well-suited for applications requiring interactive storytelling, character simulation, and dynamic conversational agents where control over response verbosity is crucial. It integrates well with platforms like SillyTavern, with specific settings provided for optimal performance. Suggested text generation settings include TFS 0.90-0.95, Temperature 0.70-0.85, and Repetition penalty 1.08-1.10.
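The suggested sampler ranges can be captured in a small settings sketch. Parameter names (`tfs`, `temperature`, `repetition_penalty`) follow common inference-backend conventions and are an assumption; map them to whatever keys your stack (SillyTavern, llama.cpp, etc.) actually expects.

```python
# Hedged sketch: the suggested sampler ranges from this card, with a
# helper that picks the midpoint of each range as a starting value.
# Key names are assumptions; rename to match your inference backend.

SUGGESTED_RANGES = {
    "tfs": (0.90, 0.95),
    "temperature": (0.70, 0.85),
    "repetition_penalty": (1.08, 1.10),
}

def midpoint_settings(ranges=SUGGESTED_RANGES) -> dict[str, float]:
    """Return the midpoint of each suggested range as a starting point."""
    return {k: round((lo + hi) / 2, 3) for k, (lo, hi) in ranges.items()}

settings = midpoint_settings()
```

From there, nudge temperature up within its range for more varied prose, or raise repetition penalty slightly if the model begins to loop.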