Multi-Verse-RP-7B: A Roleplay-Optimized 7B Merge
saishf/Multi-Verse-RP-7B is an experimental 7 billion parameter language model created by saishf by merging several pre-trained Mistral-based models. Using the task arithmetic merge method, it combines several of jeiku's specialized LoRAs on top of ammarali32/multi_verse_model as the base, with the aim of enhancing roleplay capability.
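Task arithmetic merges of this kind are commonly described with a mergekit configuration file. The sketch below is illustrative only: the base model is taken from the text above, but the LoRA names, weights, and dtype are assumptions, not the actual recipe used for this model.

```yaml
# Hypothetical mergekit recipe for a task-arithmetic merge (not the real one).
merge_method: task_arithmetic
base_model: ammarali32/multi_verse_model
models:
  - model: ammarali32/multi_verse_model+jeiku/example-roleplay-lora  # LoRA name is a placeholder
    parameters:
      weight: 0.5  # weight is an assumption; actual values are not documented here
dtype: float16
```

In task arithmetic, each contributing model's delta from the base (its "task vector") is scaled by its weight and added back onto the base, which is why a shared base model is required.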
Key Capabilities
- Roleplay Specialization: Demonstrates strong performance in roleplay scenarios, particularly in handling non-human characters and accurately separating human from non-human actions.
- Instruction Format Versatility: Compatible with both Alpaca and ChatML instruction formats; Alpaca is reported to give the best results, owing to the LoRAs included in the merge.
- Competitive Performance: Achieved an average score of 74.73 on the Open LLM Leaderboard, including 72.35 on AI2 Reasoning Challenge and 88.37 on HellaSwag.
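To make the two supported instruction formats concrete, the following sketch builds prompt strings in each style. These are the standard Alpaca and ChatML templates, not anything specific to this model; no inference is performed.

```python
# Sketch: the two instruction formats this merge accepts.
# Pure string templates -- swap in your own instruction/system text.

def alpaca_prompt(instruction: str, user_input: str) -> str:
    """Alpaca format: headed ### sections, reported to work best with this merge."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )


def chatml_prompt(system: str, user: str) -> str:
    """ChatML format: turns delimited by <|im_start|> / <|im_end|> tokens."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    print(alpaca_prompt("Roleplay as a dragon librarian.", "The party enters the archive."))
    print(chatml_prompt("You are a dragon librarian.", "The party enters the archive."))
```

Either string can then be tokenized and passed to the model as-is; for roleplay use, the character description typically goes in the instruction (Alpaca) or system turn (ChatML).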
Good For
- Developers and users seeking a 7B model merged specifically for creative roleplay applications.
- Scenarios requiring nuanced handling of diverse character types and actions within a narrative.
- Exploratory use in experimental LLM merges, showcasing the potential of task arithmetic for specialized outcomes.