zarakiquemparte/zarafusionex-1.1-l2-7b

Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Aug 25, 2023 · License: other · Architecture: Transformer

Zarakiquemparte/zarafusionex-1.1-l2-7b is a 7 billion parameter language model created by zarakiquemparte, formed by merging Nous Hermes Llama2 7b with Stable Beluga 7b and then applying the LimaRP Llama2 7B LoRA. The model combines the strengths of its constituent models, offering a versatile base for a range of text generation tasks. It is particularly suited to applications that need flexible instruction formats, including the Alpaca 2 and LimaRP styles.


Zarafusionex 1.1 L2 7b Overview

Zarakiquemparte/zarafusionex-1.1-l2-7b is a 7 billion parameter language model developed by zarakiquemparte. It is a merge of several existing models, combining Nous Hermes Llama2 7b (53%) and Stable Beluga 7b (47%). The resulting merge was then further combined with the LimaRP Llama2 7B LoRA version from July 23, 2023.

Key Characteristics

  • Merged Architecture: Built upon a foundation of Nous Hermes Llama2 7b and Stable Beluga 7b, enhanced with the LimaRP Llama2 7B LoRA.
  • Reproducible Merging Process: The model's creation process, including the merging of base models and the application of Lora, is transparent and reproducible using provided scripts.
  • Flexible Instruction Formats: Supports multiple instruction formats, including the Alpaca 2 format and the LimaRP instruction format, making it adaptable to different prompting styles.
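The 53%/47% merge described above is, at its core, a weighted linear interpolation of the two base models' parameters. A minimal sketch of that idea, using plain Python lists as stand-ins for real model tensors (the actual merge scripts published with the model operate on full checkpoints, so the names and toy values here are illustrative assumptions only):

```python
def merge_weights(a, b, ratio_a):
    """Linearly interpolate two parameter sets: ratio_a * a + (1 - ratio_a) * b.

    Here `a` and `b` are flat lists of floats standing in for model tensors.
    """
    return [ratio_a * x + (1.0 - ratio_a) * y for x, y in zip(a, b)]

# Hypothetical toy parameters representing the two base models.
hermes = [0.2, -0.5, 1.0]   # stand-in for Nous Hermes Llama2 7b weights
beluga = [0.4, 0.1, -1.0]   # stand-in for Stable Beluga 7b weights

# 53% Hermes / 47% Beluga, matching the ratio stated on this model card.
merged = merge_weights(hermes, beluga, 0.53)
```

In a real merge the same interpolation is applied tensor-by-tensor across the full state dict, and the LoRA weights are applied on top of the merged result.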

Usage Considerations

  • Instruction Following: Designed to respond effectively to prompts formatted in either Alpaca 2 or LimaRP styles.
  • Limitations: This model is not intended for providing factual information or advice, emphasizing its role in creative or conversational text generation rather than knowledge retrieval.
  • Quantized Versions Available: Optimized versions for various hardware are available through @TheBloke, including GGML, GGUF, and GPTQ formats.
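Since the model responds to Alpaca-style prompting, a simple helper can assemble prompts in that shape. The exact template wording below is an assumption based on the standard Alpaca format; consult the original model card for the authoritative Alpaca 2 and LimaRP templates:

```python
def alpaca_prompt(instruction: str, response: str = "") -> str:
    # Alpaca-style instruction template (assumed wording; verify against
    # the model card before relying on it for inference).
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response}"
    )

prompt = alpaca_prompt("Write a short greeting.")
```

The returned string can be passed directly as the prompt to any of the quantized GGUF/GGML/GPTQ builds, e.g. via llama.cpp-compatible runtimes.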