Masterjp123/MasterRP-V1-L2-13B

Text generation · Concurrency cost: 1 · Model size: 13B · Quant: FP8 · Context length: 4k · License: llama2 · Architecture: Transformer · Open weights · Cold

Masterjp123/MasterRP-V1-L2-13B is a 13 billion parameter language model created by Masterjp123, specifically designed and merged for an enhanced role-playing experience. This model integrates components from several established role-playing models including REMM, Mlewd, Emerhyist, PsymMedRP, and magdump. It supports both Alpaca and LimaRP-V3 formatting, making it suitable for diverse role-playing applications.


MasterRP-V1-L2-13B: A Merged Model for Role-Playing

Masterjp123/MasterRP-V1-L2-13B is a 13 billion parameter language model developed by Masterjp123 with the explicit goal of providing a superior role-playing (RP) experience. The model is the result of merging several well-regarded RP-focused models, including REMM, Mlewd, Emerhyist, PsymMedRP, and magdump, among others.

Key Capabilities & Features

  • Optimized for Role-Playing: The core design principle is to deliver a robust and engaging role-playing experience, leveraging the strengths of multiple specialized RP models.
  • Flexible Formatting: Supports both the widely used Alpaca instruction format and the LimaRP-V3 format, offering versatility for developers and users.
  • Merged Architecture: Created using the ModelREVOLVER tool to combine various models, aiming to consolidate their respective strengths into a single, cohesive unit.
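As a rough illustration of the supported formatting, an Alpaca-style prompt for this model might be assembled as follows. The header sentence and section markers reflect the common Alpaca convention, not wording confirmed by this model card, and the LimaRP-V3 variant has its own markers; check the model card before relying on either.

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the widely used Alpaca instruction format.

    The header line and "### ..." markers follow the common Alpaca
    convention; adjust them if the model card specifies different wording.
    """
    parts = [
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.",
        "\n### Instruction:\n" + instruction,
    ]
    if user_input:  # the Input section is optional in the Alpaca format
        parts.append("\n### Input:\n" + user_input)
    parts.append("\n### Response:\n")
    return "\n".join(parts)

prompt = build_alpaca_prompt(
    "Continue the role-play as the tavern keeper.",
    "Traveler: Do you have a room for the night?",
)
print(prompt)
```

The model's reply is then generated as a completion after the final `### Response:` marker.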

Intended Use Cases

  • Enhanced Role-Playing Scenarios: Ideal for applications requiring detailed, immersive, and nuanced character interactions.
  • Creative Storytelling: Can be utilized for generating dynamic narratives and dialogues within role-playing contexts.

Limitations

  • Quantization: The creator notes that they currently lack the experience to quantize the model themselves. Users who need quantized weights must perform the quantization independently and provide proper attribution.
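For users who do want to quantize the merged weights themselves, one common route is llama.cpp's GGUF toolchain. The sketch below only assembles the typical two-step command sequence as strings; the script name, binary name, and quant type are assumptions that vary between llama.cpp versions, so treat this as an illustration rather than a recipe.

```python
# Hypothetical llama.cpp quantization workflow for a local checkout of the
# merged weights. Script/binary names and flags differ across llama.cpp
# versions -- verify them against your installed copy before running.
convert_cmd = [
    "python", "convert_hf_to_gguf.py",           # HF checkpoint -> GGUF (f16)
    "MasterRP-V1-L2-13B",                        # local model directory
    "--outfile", "masterrp-v1-l2-13b-f16.gguf",
]
quantize_cmd = [
    "./llama-quantize",                          # llama.cpp quantization tool
    "masterrp-v1-l2-13b-f16.gguf",               # full-precision GGUF input
    "masterrp-v1-l2-13b-q4_k_m.gguf",            # quantized output
    "Q4_K_M",                                    # a commonly used 4-bit type
]
print(" ".join(convert_cmd))
print(" ".join(quantize_cmd))
```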