zarakiquemparte/zararp-1.1-l2-7b
zarakiquemparte/zararp-1.1-l2-7b is a 7-billion-parameter merged language model that combines Nous Hermes Llama2 7b and Stable Beluga 7b with the LimaRP Llama2 v2 Lora and the PIPPA ShareGPT Subset Variation Two Lora. The model is designed for roleplaying and conversational tasks, supporting several instruction formats including Alpaca 2 and custom roleplay prompts, and generates interactive dialogue within a 4096-token context window.
Overview
ZaraRP 1.1 L2 7b is a 7-billion-parameter language model created by zarakiquemparte by merging several base models and LoRAs. The core merge combines Nous Hermes Llama2 7b (53%) and Stable Beluga 7b (47%); the result is then further merged with LimaRP Llama2 v2 Lora 7b and PIPPA ShareGPT Subset Variation Two Lora 7b.
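The card does not spell out the merge arithmetic, but a linear merge at these ratios amounts to a weighted average of the two models' parameter tensors. Below is a minimal sketch using plain Python floats in place of real tensors; the function and key names are illustrative, not taken from the original merge scripts.

```python
def linear_merge(sd_a, sd_b, weight_a=0.53, weight_b=0.47):
    """Weighted average of two state dicts with identical keys.

    In practice sd_a/sd_b would hold torch tensors of equal shape;
    scalar floats are used here so the sketch runs without dependencies.
    """
    assert sd_a.keys() == sd_b.keys(), "architectures must match"
    return {k: weight_a * sd_a[k] + weight_b * sd_b[k] for k in sd_a}

# Toy "state dicts" standing in for Nous Hermes and Stable Beluga weights:
merged = linear_merge({"layer.weight": 1.0}, {"layer.weight": 2.0})
# merged["layer.weight"] ≈ 0.53*1.0 + 0.47*2.0 = 1.47
```

Tools such as mergekit implement the same idea at scale, iterating over every tensor in the checkpoints.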
Key Capabilities
- Roleplaying and Conversational AI: Optimized for generating interactive dialogues and engaging in character-based roleplay scenarios.
- Flexible Instruction Formats: Supports multiple instruction formats, including Alpaca 2, a custom SYSTEM:/USER:/CHARACTER: format, and a detailed Alpaca LimaRP format for nuanced character interactions.
- Merged Architecture: Utilizes a unique merging approach, combining strengths from different foundational models and fine-tuning layers.
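As a sketch of what these formats look like in practice, the helpers below build prompts in the Alpaca 2 and custom roleplay styles. The exact whitespace and field order are assumptions; consult the templates in the original model card before relying on them.

```python
def alpaca2_prompt(instruction: str) -> str:
    # Common Alpaca-style layout; the exact spacing in the
    # original card's Alpaca 2 template may differ.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def custom_rp_prompt(system: str, user: str, character_name: str) -> str:
    # The custom SYSTEM:/USER:/CHARACTER: roleplay format named above;
    # the trailing "Name:" cues the model to reply in character.
    return f"SYSTEM: {system}\nUSER: {user}\n{character_name}:"

prompt = custom_rp_prompt(
    system="You are Zara, a stoic swordswoman.",
    user="Zara, what brings you to this tavern?",
    character_name="Zara",
)
```

The completed prompt string would then be passed to the model's generate call, with the model continuing from the character cue.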
Usage Considerations
- Instruction Formats: Users should adhere to the specified instruction formats (Alpaca 2, Custom, Alpaca LimaRP) for optimal performance.
- Limitations: This model is not intended for providing factual information or advice. Its primary strength lies in creative and interactive text generation.
Training Details
The model is a result of a merging process, which can be reproduced using specific scripts provided by zarakiquemparte. Further details on the constituent models can be found via the links in the original model card.
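The authoritative scripts are those linked from the model card; purely as an illustration, a comparable two-model linear merge could be described with a mergekit-style config. The repository IDs below are guesses at the constituent models' Hugging Face names, not values from the original scripts.

```yaml
# Hypothetical mergekit config approximating the base merge;
# model IDs are assumptions, not taken from the original scripts.
merge_method: linear
models:
  - model: NousResearch/Nous-Hermes-llama-2-7b
    parameters:
      weight: 0.53
  - model: stabilityai/StableBeluga-7B
    parameters:
      weight: 0.47
dtype: float16
```

The two LoRAs would then be applied on top of the merged base, for example with peft's merge_and_unload, to produce the final checkpoint.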