zarakiquemparte/zarablend-1.1-l2-7b

Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Aug 26, 2023 · License: other · Architecture: Transformer

Zarablend 1.1 L2 7b is a 7 billion parameter language model created by zarakiquemparte, built by merging Nous Hermes Llama2 7b, Airoboros L2 7B GPT4 2.0, and LimaRP Llama2 7B. By combining its constituent models, it supports multiple instruction formats, including Alpaca 2 and LimaRP, and is designed for general language tasks, particularly those that benefit from a blend of diverse fine-tuning approaches.


Model Overview

Zarablend 1.1 L2 7b is a 7 billion parameter language model developed by zarakiquemparte. It is a unique blend created through a multi-stage merging process:

  • Base Model: Nous Hermes Llama2 7b (66% contribution)
  • First Merge Component: Airoboros L2 7B GPT4 2.0 (34% contribution)
  • Second Merge Component: LimaRP Llama2 7B (LoRA version)

This merging process was facilitated by custom scripts, allowing for the combination of different fine-tuning methodologies. The model's architecture is rooted in the Llama2 family, inheriting its foundational capabilities.
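The model card does not publish the merge scripts themselves, but the 66/34 split suggests a weighted linear interpolation of parameter tensors. The sketch below is an illustrative reconstruction, not the author's actual code: it uses plain floats as stand-ins for full model tensors, and the parameter names are hypothetical.

```python
def weighted_merge(base, other, base_weight):
    """Linearly interpolate two parameter dicts: w * base + (1 - w) * other.

    Assumes both dicts share the same keys (same architecture), as is the
    case when merging two Llama2 7B fine-tunes.
    """
    return {
        name: base_weight * param + (1.0 - base_weight) * other[name]
        for name, param in base.items()
    }

# Toy parameters standing in for full model state dicts (illustrative only).
hermes = {"layer.0.weight": 1.0, "layer.0.bias": 0.5}
airoboros = {"layer.0.weight": 2.0, "layer.0.bias": -0.5}

# Stage 1 of the described process: 66% Nous Hermes, 34% Airoboros.
stage1 = weighted_merge(hermes, airoboros, base_weight=0.66)
```

In the real pipeline each value would be a tensor of the same shape in both models, and the LimaRP LoRA would then be applied on top of this intermediate merge.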

Key Capabilities & Usage

Due to its merged heritage, Zarablend 1.1 L2 7b supports multiple instruction formats, providing flexibility for users:

  • Alpaca 2 Format:
    ### Instruction:
    <prompt>
    
    ### Response:
    <leave a newline blank for model to respond>
  • LimaRP Format:
    <<SYSTEM>>
    <character card and system prompt>
    
    <<USER>>
    <prompt>
    
    <<AIBOT>>
    <leave a newline blank for model to respond>
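The two templates above can be assembled programmatically. The helper names below are hypothetical, but the strings they produce follow the formats exactly, ending with the blank line after which the model is expected to respond:

```python
def alpaca2_prompt(instruction: str) -> str:
    """Build a prompt in the Alpaca 2 format shown above."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def limarp_prompt(system: str, user: str) -> str:
    """Build a prompt in the LimaRP format shown above."""
    return f"<<SYSTEM>>\n{system}\n\n<<USER>>\n{user}\n\n<<AIBOT>>\n"
```

For example, `alpaca2_prompt("Summarize this paragraph.")` yields the Alpaca 2 template with the instruction filled in, ready to send to the model.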

Limitations

This model is not intended to provide factual information or advice. Users should be aware of the biases and risks inherent in merged language models: its training details are inherited from its constituent models, and its behavior reflects the combined characteristics of Nous Hermes, Airoboros, and LimaRP.