Undi95/OpenRP-13B
Undi95/OpenRP-13B is a 13 billion parameter experimental language model developed by Undi95, built through a multi-step merge process combining OpenOrca, Pygmalion, MLewd, and Spicyboros models, with a final application of Limarp2. This model is specifically designed to function as a roleplay model, aiming to avoid censorship and enhance creative writing capabilities. It features a 4096-token context length and demonstrates an average performance of 53.25 on the Open LLM Leaderboard.
Undi95/OpenRP-13B: An Experimental Roleplay Model
Undi95/OpenRP-13B is a highly experimental 13 billion parameter language model, developed by Undi95, with a primary focus on roleplay capabilities and reduced censorship. The model's unique architecture is the result of a complex, multi-stage merging process:
Key Development Steps:
- Initial Merges: Combined `Open-Orca/OpenOrcaxOpenChat-Preview2-13B` with `PygmalionAI/pygmalion-2-13b` to create `OpenOrcaPyg2`. Separately, `Undi95/MLewd-L2-13B-v2-3` was merged with `jondurbin/spicyboros-13b-2.2` to form `MLewdBorosPlus`.
- Layered Merges: Specific layers (0-8 with MLewd, 16-20 with Spicyboros) were integrated into both `OpenOrcaPyg2` and `MLewdBorosPlus` to create `OpenOrcaPyg2-Layered` and `MLewdBorosPlus-Layered`.
- Final Composition: These layered models were then merged to form `OpenRPBase`, followed by the application of `lemonilia/limarp-llama2-v2` at a 0.5 weight to produce the final `OpenRP-13B` model.
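The steps above combine two operations common in model merging: weighted averaging of all parameters (the initial merges and the final 0.5-weight application) and splicing whole layers from a donor model (the layered merges). A minimal sketch of both operations, using plain Python dicts of floats as stand-ins for real weight tensors; the function names and key format are illustrative, not the actual tooling Undi95 used:

```python
def linear_merge(sd_a, sd_b, weight_b=0.5):
    """Blend two state dicts: result = (1 - weight_b) * a + weight_b * b."""
    return {k: (1.0 - weight_b) * sd_a[k] + weight_b * sd_b[k] for k in sd_a}

def splice_layers(base, donor, layer_ids):
    """Return a copy of base with whole layers overwritten from donor."""
    merged = dict(base)
    for k, v in donor.items():
        # assume keys look like "layers.<idx>.<param_name>"
        idx = int(k.split(".")[1])
        if idx in layer_ids:
            merged[k] = v
    return merged

# Toy two-layer "models" (single scalar weight per layer).
a = {"layers.0.w": 1.0, "layers.1.w": 3.0}
b = {"layers.0.w": 2.0, "layers.1.w": 5.0}

base = linear_merge(a, b)              # 50/50 blend, like the initial merges
spliced = splice_layers(base, b, {0})  # overwrite layer 0 from the donor
final = linear_merge(spliced, a, 0.5)  # apply a second model at 0.5 weight
```

In practice the same logic is applied per-tensor across a full transformer checkpoint; tools such as mergekit express these operations declaratively rather than in hand-written loops.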
Performance & Characteristics:
Despite its experimental nature, OpenRP-13B achieves an average score of 53.25 on the Open LLM Leaderboard, with notable scores including 62.12 on ARC (25-shot) and 82.6 on HellaSwag (10-shot). The model has a 4096-token context length. It is specifically engineered to leverage the Pygmalion-2 dataset for roleplay and to integrate MLewd and Spicyboros layers for more creative, less-censored output. Users should note one reported quirk: an "obsession with the game Garry's Mod."
When to Use This Model:
- Experimental Roleplay: Ideal for developers and users interested in exploring advanced, less-censored roleplay scenarios.
- Creative Writing: Suitable for tasks requiring imaginative and unconstrained text generation.
- Research into Merged Architectures: Provides a case study for complex model merging strategies aimed at specific behavioral outcomes.