zarakiquemparte/zararp-l2-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Sep 4, 2023 · License: other · Architecture: Transformer

zarakiquemparte/zararp-l2-7b is a 7 billion parameter merged language model that combines Nous Hermes Llama2 7b and Stable Beluga 7b with LimaRP and PIPPA ShareGPT LoRA adapters. The model supports several instruction formats, including Alpaca 2 and custom roleplay-oriented prompts, and is primarily intended for conversational and role-playing applications, leveraging the strengths of its constituent models.


Model Overview

zarakiquemparte/zararp-l2-7b is a 7 billion parameter language model created through a series of merges. Its base combines Nous Hermes Llama2 7b (53%) and Stable Beluga 7b (47%). This merged base was then further augmented by integrating LoRA versions of LimaRP Llama2 7B and PIPPA ShareGPT Subset Variation Two LoRA 7b.
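The model card does not specify the exact merge method, but a 53%/47% split is commonly realized as a per-tensor weighted average of two checkpoints. The sketch below illustrates that idea; the function and variable names are hypothetical, not from this repository.

```python
def merge_state_dicts(state_a, state_b, weight_a=0.53, weight_b=0.47):
    """Blend two checkpoints with matching parameter names, tensor by tensor.

    Assumes a simple weighted linear merge; the actual recipe used for
    zararp-l2-7b may differ.
    """
    assert state_a.keys() == state_b.keys(), "checkpoints must share parameter names"
    return {
        name: weight_a * state_a[name] + weight_b * state_b[name]
        for name in state_a
    }
```

With real models, `state_a` and `state_b` would be the `state_dict()` of each Llama2-architecture checkpoint; LoRA adapters would then be applied on top of the merged weights.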

Key Characteristics

  • Merged Architecture: Built upon a foundation of Nous Hermes Llama2 and Stable Beluga, augmented with specialized LoRAs for LimaRP and PIPPA ShareGPT.
  • Instruction Format Flexibility: Supports multiple instruction formats, including:
    • Alpaca 2 (### Instruction: <prompt> ### Response:)
    • Custom roleplay format (SYSTEM: Do thing USER: {prompt} CHARACTER:)
    • LimaRP format (<<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>>)
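The three formats above can be assembled as plain strings. The helpers below are an illustrative sketch; the function names and the exact newline placement are assumptions, so verify spacing against the upstream model card before depending on it.

```python
def alpaca2_prompt(instruction: str) -> str:
    """Build an Alpaca 2 style prompt (### Instruction: / ### Response:)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def roleplay_prompt(system: str, user: str, character: str) -> str:
    """Build the custom roleplay prompt (SYSTEM: / USER: / CHARACTER:)."""
    return f"SYSTEM: {system}\nUSER: {user}\n{character}:"

def limarp_prompt(system: str, user: str) -> str:
    """Build the LimaRP prompt (<<SYSTEM>> / <<USER>> / <<AIBOT>>)."""
    return f"<<SYSTEM>>\n{system}\n<<USER>>\n{user}\n<<AIBOT>>\n"
```

Whichever format is used, the assembled string is passed to the model as-is; generation then continues from the trailing response marker.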

Intended Use Cases

This model is particularly well-suited for applications requiring flexible instruction following and conversational interactions, especially those benefiting from the characteristics of its merged components. It is designed for use in scenarios where diverse prompting styles are common, such as role-playing or interactive storytelling. Users should note that the model is not intended for providing factual information or advice.