zarakiquemparte/zaramix-l2-7b

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: other · Architecture: Transformer

zarakiquemparte/zaramix-l2-7b is a 7-billion-parameter language model produced by merging Nous Hermes Llama2 7b (72%), Stable Beluga 7b (28%), and the LimaRP Llama2 7B LoRA. The merge combines the characteristics of its base models and supports both the Alpaca 2 and LimaRP instruction formats. It is designed for conversational and creative text generation, leveraging the strengths of its merged components.


Overview

zarakiquemparte/zaramix-l2-7b was created by zarakiquemparte through a weighted merging process. It combines three components: Nous Hermes Llama2 7b (72%), Stable Beluga 7b (28%), and the LimaRP Llama2 7B LoRA. The merge aims to leverage the strengths of each component, yielding a model capable of diverse text generation.

Key Characteristics

  • Merged Architecture: Built on Llama2, integrating Nous Hermes and Stable Beluga for foundational capabilities and the LimaRP LoRA for specific fine-tuning.
  • Instruction Format Flexibility: Supports both the Alpaca 2 and LimaRP instruction formats, allowing for versatile prompting and interaction.
  • Reproducible Merge: The merging process was conducted using custom scripts provided by zarakiquemparte, enabling reproducibility.
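
Prompting in either format amounts to assembling a plain string template. Below is a minimal sketch of an Alpaca-style prompt builder; the exact header text (`### Instruction:` / `### Input:` / `### Response:`) is an assumption based on the widely used Alpaca template, so check the upstream model cards for the canonical wording:

```python
# Hypothetical helper for building an Alpaca-style prompt for this model.
# The header strings are assumed from the common Alpaca layout, not
# confirmed by this model card.

def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the widely used Alpaca layout."""
    parts = ["### Instruction:", instruction]
    if user_input:
        # The optional Input section carries extra context for the task.
        parts += ["", "### Input:", user_input]
    parts += ["", "### Response:", ""]
    return "\n".join(parts)

print(alpaca_prompt("Write a short poem about the sea."))
```

The returned string ends just after the `### Response:` header, so the model's completion continues directly from there.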

Intended Use Cases

This model is suited to conversational AI and creative text generation tasks, benefiting from the combined characteristics of its base models. Users can experiment with either instruction format to achieve the desired output. Note that the model is not intended to provide factual information or advice.