core-3/kuno-royale-v2-7b
kuno-royale-v2-7b is a 7 billion parameter merged language model developed by core-3, built on SanjiWatsuki/Kunoichi-DPO-v2-7B and eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO. The model is specifically designed to strengthen roleplaying prose, and the author reports improved performance in personal roleplay tests compared to its base model. It retains a 4096-token context length and shows improved leaderboard scores in specific categories such as GSM8K and MMLU.
Model Overview
kuno-royale-v2-7b is a 7 billion parameter merged language model created by core-3, aiming to enhance the roleplaying prose capabilities of its predecessor, SanjiWatsuki/Kunoichi-DPO-v2-7B. It integrates components from eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO, a high-scoring 7B model on the Open LLM Leaderboard. The merge was performed using LazyMergekit, combining layers from both base models.
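Merges produced with LazyMergekit are defined by a YAML recipe passed to mergekit. The sketch below shows what a typical layer-merge configuration between these two base models could look like; the merge method, layer ranges, interpolation weights, and dtype here are illustrative assumptions, not the actual recipe from this model card:

```yaml
# Hypothetical LazyMergekit recipe (illustrative values only)
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
        layer_range: [0, 32]
merge_method: slerp          # spherical interpolation between the two stacks
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:                         # per-module interpolation weights (assumed)
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

In recipes of this shape, `t` controls how much each layer leans toward the second model, with separate schedules for attention and MLP blocks.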
Key Capabilities
- Enhanced Roleplaying Prose: Specifically developed to improve the quality and depth of roleplaying interactions.
- Improved Benchmarks: Demonstrates better performance in certain leaderboard metrics compared to SanjiWatsuki/Kunoichi-DPO-v2-7B, particularly in GSM8K and MMLU.
- SillyTavern Compatibility: Works effectively with the Noromaid template recommended for Kunoichi-7B, with context and instruct configurations provided.
When to Use This Model
- Roleplaying Applications: Ideal for scenarios requiring nuanced and engaging character interactions.
- Creative Writing: Suitable for generating descriptive and immersive narrative content.
- Personal Use Cases: Performed well in the author's own roleplay tests, making it promising for applications where prose quality is paramount.
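For the use cases above, the model is typically driven through an instruct template configured in the frontend. As a minimal sketch, the helper below assembles an Alpaca-style prompt; the exact template string is an assumption (this card only names the Noromaid SillyTavern template), so adapt it to whatever format your frontend is configured with:

```python
def build_prompt(instruction: str, system: str = "") -> str:
    """Assemble an Alpaca-style instruction prompt.

    The "### Instruction:" / "### Response:" layout is an assumed
    convention for Kunoichi-family merges, not confirmed by this card.
    """
    parts = []
    if system:
        parts.append(system.strip())
    parts.append("### Instruction:\n" + instruction.strip())
    parts.append("### Response:\n")
    return "\n\n".join(parts)


prompt = build_prompt(
    "Stay in character as a weary innkeeper and greet the traveler.",
    system="You are a roleplaying assistant that writes vivid, immersive prose.",
)
print(prompt)
```

The resulting string would then be passed to the model through your inference stack of choice (e.g. a transformers `text-generation` pipeline or a llama.cpp frontend), with generation stopped at the next `###` marker.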