Stormtrooperaim/Mystral-Uncensored-RP-7B is a 7 billion parameter merged language model built upon the Mistral architecture, specifically designed for uncensored responses, creative writing, and roleplay scenarios. This model integrates strengths from multiple base models, including Naphula/Warlock-7B-v3 and luvGPT/mistral-7b-uncensored, to enhance instruction following and general assistance. It offers a 4096-token context length, making it suitable for extended conversational and narrative tasks requiring nuanced, unrestricted output.
# Mystral-Uncensored-RP-7B Overview
Mystral-Uncensored-RP-7B is a 7 billion parameter language model created by Stormtrooperaim through a merge of several specialized models using LazyMergekit. This model is engineered to combine complementary strengths, resulting in a versatile tool for various text generation tasks.
## Key Capabilities
- Uncensored Responses: Designed to provide unrestricted and unfiltered output, distinguishing it from many other models.
- Enhanced Roleplay: Integrates capabilities from models like LimaRP and Samantha to excel in roleplaying scenarios.
- Creative Writing: Optimized for generating imaginative and engaging creative content.
- Instruction Following: Demonstrates strong ability to adhere to user instructions.
- General Assistance: Capable of providing broad assistance across a range of conversational and informational queries.
## Model Composition
This model is a merge of seven distinct base models, including:
- Naphula/Warlock-7B-v3
- luvGPT/mistral-7b-uncensored
- QuixiAI/samantha-mistral-instruct-7b
- SilverFan/LimaRP-daybreak-7B
The merge used the TIES method with Naphula/Warlock-7B-v3 as the base model, with per-model density and weight parameters configured to shape its specialized characteristics. This composition allows Mystral-Uncensored-RP-7B to offer a distinct blend of capabilities, particularly in areas requiring creative freedom and nuanced interaction.
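To make the TIES method concrete, the following is a minimal NumPy sketch of its three core steps on toy weight tensors: trim each model's task vector (finetuned minus base) to its largest-magnitude entries per the density parameter, elect a majority sign per parameter, and average only the deltas that agree with that sign, scaled by the per-model weights. This is a simplified illustration of the general algorithm, not the actual mergekit implementation.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weights=None):
    """Simplified TIES merge of several finetuned tensors onto a base.

    base:      np.ndarray of base-model parameters
    finetuned: list of np.ndarray, each the same shape as base
    density:   fraction of each task vector kept by magnitude (trim step)
    weights:   optional per-model scaling factors
    """
    if weights is None:
        weights = [1.0] * len(finetuned)
    trimmed = []
    for ft, w in zip(finetuned, weights):
        tv = (ft - base) * w                   # task vector, scaled by weight
        k = max(1, int(density * tv.size))     # number of entries to keep
        thresh = np.sort(np.abs(tv).ravel())[-k]
        tv = np.where(np.abs(tv) >= thresh, tv, 0.0)  # trim small deltas
        trimmed.append(tv)
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))        # elect majority sign per entry
    agree = np.where(np.sign(stacked) == sign, stacked, 0.0)
    counts = (agree != 0).sum(axis=0)
    merged_tv = agree.sum(axis=0) / np.maximum(counts, 1)  # disjoint mean
    return base + merged_tv
```

Because disagreeing deltas are zeroed before averaging, models that pull a parameter in opposite directions no longer cancel each other out, which is the main advantage TIES has over a plain weighted average.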