Undi95/OpenRP-13B: An Experimental Roleplay Model
Undi95/OpenRP-13B is a highly experimental 13-billion-parameter language model developed by Undi95, focused primarily on roleplay capability and reduced censorship. Its weights are the result of a complex, multi-stage merging process:
Key Development Steps:
- Initial Merges: Open-Orca/OpenOrcaxOpenChat-Preview2-13B was merged with PygmalionAI/pygmalion-2-13b to create OpenOrcaPyg2. Separately, Undi95/MLewd-L2-13B-v2-3 was merged with jondurbin/spicyboros-13b-2.2 to form MLewdBorosPlus.
- Layered Merges: Specific layer ranges (0-8 from MLewd, 16-20 from Spicyboros) were spliced into both OpenOrcaPyg2 and MLewdBorosPlus, yielding OpenOrcaPyg2-Layered and MLewdBorosPlus-Layered.
- Final Composition: The two layered models were then merged to form OpenRPBase, and lemonilia/limarp-llama2-v2 was applied at a weight of 0.5 to produce the final OpenRP-13B model.
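The recipe above can be sketched as a toy weight-merge. This is a minimal illustration, not the actual tooling: real merges operate on full transformer state dicts (typically via merge utilities such as mergekit), and every name and value below is an illustrative assumption.

```python
# Toy sketch of the merge recipe: average-merge, layer splicing, and a
# weighted blend. Each "model" is just a dict mapping layer names to
# lists of floats; all names and values are illustrative assumptions.

def average_merge(a, b):
    """50/50 average of two models' parameters (toy stand-in for a merge)."""
    return {k: [(x + y) / 2 for x, y in zip(a[k], b[k])] for k in a}

def splice_layers(base, donor, layer_ids):
    """Replace the given layer indices in `base` with `donor`'s layers."""
    merged = dict(base)
    for i in layer_ids:
        key = f"layers.{i}"
        if key in donor:
            merged[key] = list(donor[key])
    return merged

def weighted_merge(base, addon, weight):
    """Blend `addon` into `base` at the given weight (0.5 in the recipe)."""
    return {
        k: [(1 - weight) * x + weight * y for x, y in zip(base[k], addon[k])]
        for k in base
    }

# Tiny two-layer "models" to walk through the three steps.
open_orca = {"layers.0": [1.0, 1.0], "layers.1": [1.0, 1.0]}
pygmalion = {"layers.0": [3.0, 3.0], "layers.1": [3.0, 3.0]}
mlewd = {"layers.0": [5.0, 5.0], "layers.1": [5.0, 5.0]}

open_orca_pyg2 = average_merge(open_orca, pygmalion)  # initial merge
layered = splice_layers(open_orca_pyg2, mlewd, [0])   # layered merge
final = weighted_merge(layered, mlewd, 0.5)           # 0.5-weight composition
print(final["layers.0"])  # [5.0, 5.0]
print(final["layers.1"])  # [3.5, 3.5]
```

The layer splice is a passthrough-style operation (donor layers replace base layers outright), while the final step is a linear interpolation; both are common primitives in community model merging.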
Performance & Characteristics:
Despite its experimental nature, OpenRP-13B achieves an average score of 53.25 on the Open LLM Leaderboard, with notable scores including 62.12 on ARC (25-shot) and 82.6 on HellaSwag (10-shot). The model has a 4096-token context length. It is specifically engineered to leverage the Pygmalion-2 dataset for roleplay and to integrate MLewd and Spicyboros layers for more creative and less censored responses. Users should note one known quirk: a reported obsession with the game Garry's Mod.
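The 4096-token context length matters in practice because roleplay sessions accumulate long chat histories; front-ends typically trim older turns to fit the window. A minimal sketch, assuming a crude word-count tokenizer (a real deployment would count tokens with the model's actual tokenizer):

```python
# Minimal sketch of context-window trimming for a 4096-token limit.
# The whitespace-split "tokenizer" is a stand-in assumption; a real
# front-end would use the model's own tokenizer to count tokens.

CONTEXT_LIMIT = 4096

def count_tokens(text):
    """Crude token estimate: whitespace-separated words (assumption)."""
    return len(text.split())

def trim_history(system_prompt, turns, reserve_for_reply=512):
    """Keep the system prompt plus as many recent turns as fit the window."""
    budget = CONTEXT_LIMIT - count_tokens(system_prompt) - reserve_for_reply
    kept = []
    for turn in reversed(turns):  # walk newest-to-oldest, keep what fits
        cost = count_tokens(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    return [system_prompt] + list(reversed(kept))

history = [
    "User: hello there",
    "Bot: hi, ready to roleplay?",
    "User: let's begin",
]
prompt = trim_history("You are a roleplay character.", history)
print(len(prompt))  # 4: the system prompt plus all three short turns fit
```

Reserving headroom for the reply (here 512 tokens, an arbitrary choice) keeps generation from being cut off at the window boundary.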
When to Use This Model:
- Experimental Roleplay: Ideal for developers and users interested in exploring advanced, less-censored roleplay scenarios.
- Creative Writing: Suitable for tasks requiring imaginative and unconstrained text generation.
- Research into Merged Architectures: Provides a case study for complex model merging strategies aimed at specific behavioral outcomes.