DS-Archive/llama-2-13b-chat-limarp-v2-merged
TEXT GENERATION
- Concurrency Cost: 1
- Model Size: 13B
- Quant: FP8
- Ctx Length: 4k
- License: agpl-3.0
- Architecture: Transformer
- Open Weights
DS-Archive/llama-2-13b-chat-limarp-v2-merged is a 13 billion parameter Llama 2-based model: a merge of Llama 2 13B Chat with the LIMARP LoRA v2. It is fine-tuned for roleplaying chats and designed to carry out character-based interactions following the Alpaca instruction format, generating medium-length responses within a defined character persona and scenario.
Overview
DS-Archive/llama-2-13b-chat-limarp-v2-merged is a 13 billion parameter language model built on the Llama 2 architecture. It is a merged model combining the base Llama 2 13B Chat with the LIMARP LoRA v2, a merge specifically requested by @dampf.
Key Capabilities
- Roleplaying Chat: The model is explicitly designed and fine-tuned for engaging in roleplaying conversations.
- Persona Adherence: It can adopt and maintain a specified character's persona throughout the chat.
- Scenario-Based Interaction: Capable of generating responses within a given scenario context.
- Alpaca Instruction Format: Intended to be prompted using a specific Alpaca-style instruction format for optimal roleplay performance.
- Medium-Length Responses: Character responses are designed to be of medium length, suitable for interactive roleplay.
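To illustrate the Alpaca-style prompting described above, here is a minimal sketch of a prompt builder. The exact template wording is an assumption based on the common LIMARP Alpaca variant (`### Instruction:` / `### Input:` / `### Response:` sections); the function name, persona, and scenario text are illustrative placeholders, not part of the model's documentation.

```python
# Hypothetical sketch of an Alpaca-style roleplay prompt for this model.
# The section headers follow the common LIMARP convention; persona,
# scenario, and messages below are illustrative placeholders.

def build_limarp_prompt(persona: str, scenario: str,
                        history: list[tuple[str, str]],
                        char: str = "Character", user: str = "User") -> str:
    """Assemble an Alpaca-format roleplay prompt from persona, scenario, and chat history."""
    lines = [
        "### Instruction:",
        f"{char}'s Persona: {persona}",
        f"Scenario: {scenario}",
        f"Play the role of {char}. Reply to {user} with medium-length messages, "
        f"staying in character.",
        "",
    ]
    for speaker, message in history:
        # User turns go under "### Input:", character turns under "### Response:".
        lines.append(f"### {'Input' if speaker == user else 'Response'}:")
        lines.append(f"{speaker}: {message}")
        lines.append("")
    # Leave an open Response header so the model continues as the character.
    lines.append("### Response:")
    lines.append(f"{char}:")
    return "\n".join(lines)

prompt = build_limarp_prompt(
    persona="A stoic lighthouse keeper who speaks in short, dry sentences.",
    scenario="A traveler seeks shelter from a storm at the lighthouse.",
    history=[("User", "Hello? Is anyone there? The storm is getting worse!")],
)
print(prompt)
```

The completed prompt is then passed to the model as a single string; the model's continuation after the final `Character:` line is the in-character reply.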
Intended Use Cases
This model is primarily intended for:
- Niche Roleplaying: Engaging in character-driven conversational roleplay scenarios.
Limitations and Biases
- The model exhibits biases similar to those found in niche online roleplaying forums, in addition to the inherent biases of its base Llama 2 model.
- It is not suitable for providing factual information or advice of any kind.