zarakiquemparte/zarablend-l2-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Aug 17, 2023 · License: Other · Architecture: Transformer

Zarablend-L2-7B is a 7 billion parameter language model created by zarakiquemparte, built by merging Nous Hermes Llama2 7B (66%) and Airoboros L2 7B GPT4 2.0 (34%), with the LimaRP Llama2 7B LoRA applied on top. The merge draws on the strengths of its constituent models and supports both the Alpaca 2 and LimaRP instruction formats. It is designed for general conversational and instruction-following tasks within a 4096-token context window.


Zarablend-L2-7B: A Merged 7B Language Model

Zarablend-L2-7B is a 7 billion parameter language model developed by zarakiquemparte, created by merging three models: Nous Hermes Llama2 7B (66%), Airoboros L2 7B GPT4 2.0 (34%), and the LimaRP Llama2 7B LoRA. The merge aims to combine the diverse capabilities and training data of its base models into a single, versatile instruction-following model.

Key Characteristics

  • Merged Architecture: Combines Nous Hermes Llama2 7B (66%) and Airoboros L2 7B GPT4 2.0 (34%), with the LimaRP Llama2 7B LoRA applied on top.
  • Instruction Format Support: Compatible with both the Alpaca 2 instruction format and the LimaRP instruction format, offering flexibility in prompting.
  • Reproducibility: The merge can be reproduced with the author's publicly available scripts, allowing for transparency and further experimentation (a minimal sketch of the idea follows this list).
  • Context Length: Supports a context window of 4096 tokens.
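The snippet below is only a minimal sketch of the weighted-merge idea, not the author's actual script: it averages the tensors of two checkpoints using the 66/34 split from the model card. The repo IDs and output path are assumptions based on the constituent model names, and both checkpoints must share identical Llama 2 architectures and tensor names for this to work.

```python
# Minimal sketch of a weighted two-model merge (not the author's exact script).
import torch
from transformers import AutoModelForCausalLM

BASE_A = "NousResearch/Nous-Hermes-llama-2-7b"  # assumed repo id
BASE_B = "jondurbin/airoboros-l2-7b-gpt4-2.0"   # assumed repo id
W_A, W_B = 0.66, 0.34                           # merge ratio from the model card

model_a = AutoModelForCausalLM.from_pretrained(BASE_A, torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained(BASE_B, torch_dtype=torch.float16)

state_b = model_b.state_dict()
merged = {}
for name, tensor_a in model_a.state_dict().items():
    # Per-tensor weighted average; assumes matching shapes and names.
    merged[name] = W_A * tensor_a + W_B * state_b[name]

model_a.load_state_dict(merged)
model_a.save_pretrained("zarablend-merge-base")  # hypothetical output path
```

The LimaRP LoRA would then be applied on top of this merged base, for example with peft's `PeftModel.from_pretrained` followed by `merge_and_unload()`; the adapter weighting actually used should be taken from the original scripts.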

Intended Use Cases

Zarablend-L2-7B is suited to general instruction-following and conversational AI applications that benefit from the blend of its constituent models' capabilities. Its support for two instruction formats lets users tailor prompting to their use case. Note that the model is not intended to provide factual information or advice; refer to the original models' cards for their specific limitations and biases.
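For reference, here is a minimal inference sketch using Hugging Face transformers, assuming the repo ID shown at the top of this page and the common Alpaca-style `### Instruction:` / `### Response:` prompt layout; check the constituent model cards for the authoritative templates.

```python
# Minimal inference sketch; the prompt template is an assumption based on the
# common Alpaca-style layout, not a verbatim copy of the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "zarakiquemparte/zarablend-l2-7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt (assumed "Alpaca 2" layout).
prompt = (
    "### Instruction:\n"
    "Summarize the plot of 'The Tempest' in two sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```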