zarakiquemparte/tulpar-limarp-l2-7b

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: other · Architecture: Transformer

zarakiquemparte/tulpar-limarp-l2-7b is a 7-billion-parameter language model created by zarakiquemparte, formed by merging the Tulpar v0 7b base model with the LimaRP LLama2 7B Lora. The model supports both a custom instruction format and the LimaRP instruction format, making it adaptable to conversational and role-playing applications. Its primary use case is generating responses from specific instruction templates rather than providing factual information or advice.


Overview

zarakiquemparte/tulpar-limarp-l2-7b is a 7-billion-parameter language model developed by zarakiquemparte. It merges two components: the Tulpar v0 7b base model and the LimaRP LLama2 7B Lora from July 23, 2023. The merge was performed with a custom script, combining the strengths of both components.

Key Capabilities

  • Flexible Instruction Formats: Supports both a custom ### User: / ### Assistant: format and the <<SYSTEM>> / <<USER>> / <<AIBOT>> LimaRP instruction format, enhancing its adaptability for different conversational styles.
  • Merged Architecture: Combines the characteristics of the Tulpar v0 7b base with the LimaRP LLama2 7B Lora, potentially offering a unique blend of their original capabilities.
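The two instruction formats above can be assembled with small helper functions. This is a minimal sketch: the tag names come from the model card, but the exact whitespace and newline conventions are assumptions, so check the upstream Tulpar and LimaRP cards for the canonical templates.

```python
# Hedged sketch of prompt builders for the two supported formats.
# Exact spacing/newlines are assumptions, not confirmed by the card.

def build_custom_prompt(user_message: str) -> str:
    """Custom '### User:' / '### Assistant:' format."""
    return f"### User:\n{user_message}\n\n### Assistant:\n"

def build_limarp_prompt(system_prompt: str, user_message: str) -> str:
    """LimaRP '<<SYSTEM>>' / '<<USER>>' / '<<AIBOT>>' format.

    The system prompt typically carries the character card in
    role-playing setups.
    """
    return (
        f"<<SYSTEM>>\n{system_prompt}\n\n"
        f"<<USER>>\n{user_message}\n\n"
        f"<<AIBOT>>\n"
    )

print(build_custom_prompt("Introduce yourself."))
```

The resulting string is passed to the model as-is; generation then continues from the trailing assistant tag.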

Good for

  • Conversational AI: Ideal for applications requiring structured dialogue generation using specific instruction templates.
  • Role-playing Scenarios: Particularly suited for use cases that benefit from the LimaRP instruction format, which often includes system prompts for character cards.
  • Experimental Merges: Useful for developers interested in exploring the results of merging different Lora adapters with a base model, as the merging process is transparent and reproducible.
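For readers exploring similar merges, the general technique of folding a Lora adapter into a base model can be sketched with the `peft` library. The author used a custom script, so this is an illustration of the equivalent standard workflow, not the actual merge recipe; the model ids below are placeholders.

```python
# Conceptual sketch: merge a LoRA adapter into a base model using peft.
# This is NOT the author's custom script; ids are hypothetical placeholders.

def merge_lora(base_id: str, lora_id: str, out_dir: str) -> None:
    # Imports are deferred so the sketch can be read without the
    # (heavy) dependencies installed.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(base_id)   # e.g. Tulpar v0 7b
    adapted = PeftModel.from_pretrained(base, lora_id)     # e.g. LimaRP Lora
    merged = adapted.merge_and_unload()                    # fold LoRA deltas into base weights
    merged.save_pretrained(out_dir)                        # persist the standalone merged model

if __name__ == "__main__":
    merge_lora("path/to/tulpar-v0-7b", "path/to/limarp-lora", "./merged-model")
```

`merge_and_unload()` applies the low-rank updates directly to the base weights, so the saved checkpoint no longer needs `peft` at inference time.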