Undi95/ReMM-v1-LRPSGPT-2Char-13B
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · License: cc-by-nc-4.0 · Architecture: Transformer

Undi95/ReMM-v1-LRPSGPT-2Char-13B is a 13-billion-parameter language model based on ReMM v1, merged with the ShareGPT-13b-qloras LoRA. It is fine-tuned to generate dialogue between two distinct characters, making it highly specialized for roleplay scenarios involving two personas. The model maintains consistent character traits and conversational flow within a two-character context, using a custom prompt template to define the personas and scenario.


Model Overview

Undi95/ReMM-v1-LRPSGPT-2Char-13B is a 13-billion-parameter language model built on the ReMM v1 base model. Its core differentiation is specialized fine-tuning with the ShareGPT-13b-qloras LoRA, optimized specifically for two-character interactions.

Key Capabilities

  • Two-Character Roleplay: Designed to generate coherent and consistent dialogue between two distinct personas.
  • Persona Management: Utilizes a custom prompt template that allows for detailed definition of two character personas and a specific scenario.
  • Contextual Consistency: Aims to maintain the defined traits and conversational style for each character throughout the interaction.
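As a sketch of how such a two-persona prompt might be assembled: the official template is documented in the original LoRA README, so the section labels and layout below (`PERSONA 1`, `PERSONA 2`, `SCENARIO`) are illustrative assumptions, not the model's actual format.

```python
def build_two_char_prompt(persona_a: str, persona_b: str,
                          scenario: str,
                          history: list[tuple[str, str]]) -> str:
    """Assemble an illustrative two-character roleplay prompt.

    NOTE: the section labels here are assumptions for illustration;
    consult the original LoRA README for the official template.
    """
    lines = [
        "PERSONA 1: " + persona_a,
        "PERSONA 2: " + persona_b,
        "SCENARIO: " + scenario,
        "",  # blank line separating setup from the dialogue turns
    ]
    for speaker, utterance in history:
        lines.append(f"{speaker}: {utterance}")
    return "\n".join(lines)


prompt = build_two_char_prompt(
    "Ayla, a wry ship engineer",
    "Brin, an overly cautious navigator",
    "The two argue over a risky shortcut through an asteroid field.",
    [("Ayla", "We can shave two days off if we cut through."),
     ("Brin", "Or we can shave the hull off the ship.")],
)
```

Keeping both persona definitions and the scenario at the top of every request is what lets the model anchor each speaker's traits across the 4K-token context window.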

Good For

  • Interactive Storytelling: Ideal for applications requiring dynamic, character-driven narratives with two participants.
  • Roleplay Simulations: Excellent for creating engaging roleplay experiences where two distinct characters interact.
  • Dialogue Generation: Suited for generating conversations that adhere to specific character profiles and situational contexts.

This model is most effective when used with character cards containing "TWO PERSONAS", which match its specialized training. Details on the LoRA used and its prompt template can be found in the original LoRA repository and its associated README.
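If the model is served behind an OpenAI-compatible completions endpoint (a common setup for hosted open-weight models, though not something this card specifies), a request might be built as follows. The endpoint URL and the sampling values are assumptions; only the model identifier comes from this card.

```python
import json

# Assumption: a hypothetical OpenAI-compatible host; substitute your provider's URL.
API_URL = "https://example-host/v1/completions"


def build_completion_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style completions payload for this model.

    With a 4K context window, prompt tokens + max_tokens should stay
    under ~4096.
    """
    return {
        "model": "Undi95/ReMM-v1-LRPSGPT-2Char-13B",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.8,  # assumption: roleplay usually benefits from some variety
        "stop": ["\n\n"],    # assumption: stop at the blank line between turns
    }


payload = build_completion_request("PERSONA 1: ...\nPERSONA 2: ...\n\nAyla:")
# To actually send: requests.post(API_URL, json=payload, headers={"Authorization": "Bearer <key>"})
print(json.dumps(payload, indent=2))
```

Capping `max_tokens` well below the context limit leaves room for the persona definitions and accumulated dialogue history in the prompt.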