Undi95/OpenRP-13B

Text generation · Model size: 13B · Quant: FP8 · Context length: 4k · Published: Sep 11, 2023 · License: cc-by-nc-4.0 · Architecture: Transformer

Undi95/OpenRP-13B is a 13 billion parameter experimental language model developed by Undi95, built through a multi-step merge process combining OpenOrca, Pygmalion, MLewd, and Spicyboros models, with a final application of Limarp2. It is designed specifically as a roleplay model, aiming to reduce censorship and strengthen creative writing. It has a 4096-token context length and an average score of 53.25 on the Open LLM Leaderboard.


Undi95/OpenRP-13B: An Experimental Roleplay Model

Undi95/OpenRP-13B is a highly experimental 13 billion parameter language model, developed by Undi95, with a primary focus on roleplay capabilities and reduced censorship. The model's unique architecture is the result of a complex, multi-stage merging process:

Key Development Steps:

  • Initial Merges: Combined Open-Orca/OpenOrcaxOpenChat-Preview2-13B with PygmalionAI/pygmalion-2-13b to create OpenOrcaPyg2. Separately, Undi95/MLewd-L2-13B-v2-3 was merged with jondurbin/spicyboros-13b-2.2 to form MLewdBorosPlus.
  • Layered Merges: Specific layer ranges were then merged in (layers 0-8 with MLewd, layers 16-20 with Spicyboros), applied to both OpenOrcaPyg2 and MLewdBorosPlus to create OpenOrcaPyg2-Layered and MLewdBorosPlus-Layered.
  • Final Composition: These layered models were then merged to form OpenRPBase, followed by the application of lemonilia/limarp-llama2-v2 at a 0.5 weight to produce the final OpenRP-13B model.
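The merge recipe above can be illustrated with a minimal sketch of the two operations it relies on: a layer-range splice and a weighted (0.5) linear merge. The function names and the toy plain-dict weight representation are assumptions for illustration only; merges of this kind are normally performed on full model checkpoints with dedicated tooling, not on small dicts.

```python
def weighted_merge(a, b, weight_b=0.5):
    """Linearly interpolate two weight dicts: (1 - w) * a + w * b."""
    return {k: (1.0 - weight_b) * a[k] + weight_b * b[k] for k in a}

def layer_splice(base, donor, layer_range):
    """Replace the base model's parameters in the given layer index
    range with the donor's parameters (a layer-range merge)."""
    merged = dict(base)
    for k in donor:
        # keys are assumed to look like "layers.<idx>.<param>"
        idx = int(k.split(".")[1])
        if layer_range[0] <= idx <= layer_range[1]:
            merged[k] = donor[k]
    return merged

# Toy two-layer "checkpoints" standing in for real models
open_orca_pyg2 = {"layers.0.w": 1.0, "layers.1.w": 3.0}
mlewd = {"layers.0.w": 5.0, "layers.1.w": 7.0}

# Splice the donor's layer 0 into the base (like the 0-8 layer step)
layered = layer_splice(open_orca_pyg2, mlewd, (0, 0))

# Apply a second model at 0.5 weight (like the final Limarp2 step)
limarp = {"layers.0.w": 9.0, "layers.1.w": 1.0}
final = weighted_merge(layered, limarp, weight_b=0.5)
```

In the real pipeline each of these steps operates on every tensor of a 13B-parameter checkpoint, but the arithmetic is the same per-parameter interpolation and per-layer replacement shown here.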

Performance & Characteristics:

Despite its experimental nature, OpenRP-13B achieves an average score of 53.25 on the Open LLM Leaderboard, including 62.12 on ARC (25-shot) and 82.6 on HellaSwag (10-shot). The model has a 4096-token context length. It is engineered to leverage the Pygmalion-2 dataset for roleplay, with the MLewd and Spicyboros layers integrated to enhance creative writing and uncensored responses. Users should note a reported "obsession with the game 'Garry's Mod'" as a known quirk.

When to Use This Model:

  • Experimental Roleplay: Ideal for developers and users interested in exploring advanced, less-censored roleplay scenarios.
  • Creative Writing: Suitable for tasks requiring imaginative and unconstrained text generation.
  • Research into Merged Architectures: Provides a case study for complex model merging strategies aimed at specific behavioral outcomes.

Popular Sampler Settings

The most popular sampler configurations used by Featherless users for this model tune the following parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p.
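To make the sampling parameters concrete, here is a minimal, self-contained sketch of how temperature, top-k, top-p (nucleus), and min-p filtering interact when choosing the next token. This illustrates the standard techniques generically; it is not Featherless's actual implementation, and the function name and example logits are made up.

```python
import math

def filter_probs(logits, temperature=1.0, top_k=0, top_p=1.0, min_p=0.0):
    """Apply temperature, then top-k / top-p / min-p filtering, and
    return the renormalized probability distribution over token ids."""
    # Temperature: higher values flatten the distribution
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    keep = set(range(len(probs)))
    order = sorted(keep, key=lambda i: probs[i], reverse=True)

    # top_k: keep only the k most likely tokens (0 disables)
    if top_k > 0:
        keep &= set(order[:top_k])

    # top_p (nucleus): keep the smallest prefix whose mass >= top_p
    if top_p < 1.0:
        nucleus, mass = set(), 0.0
        for i in order:
            nucleus.add(i)
            mass += probs[i]
            if mass >= top_p:
                break
        keep &= nucleus

    # min_p: drop tokens below min_p times the top token's probability
    if min_p > 0.0:
        threshold = min_p * probs[order[0]]
        keep &= {i for i in order if probs[i] >= threshold}

    # Renormalize over the surviving tokens
    z = sum(probs[i] for i in keep)
    return {i: probs[i] / z for i in keep}
```

The penalty parameters (frequency_penalty, presence_penalty, repetition_penalty) act earlier in the pipeline, by down-weighting the logits of tokens that already appear in the context, before any of the filtering above is applied.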