zarakiquemparte/beluga-limarp-7b

Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · License: other · Architecture: Transformer · Concurrency cost: 1

The zarakiquemparte/beluga-limarp-7b is a 7 billion parameter language model based on the Llama 2 architecture. It is a merge of the Stable Beluga 7b and LimaRP Llama2 7B models, created by zarakiquemparte. This model is designed for general language generation tasks, leveraging the combined strengths of its base models.


Model Overview

The zarakiquemparte/beluga-limarp-7b is a merge of two Llama 2 based models, Stable Beluga 7B and LimaRP Llama2 7B, carried out by zarakiquemparte using a custom merge script. Merging combines the weights of the two base models into a single 7 billion parameter checkpoint intended to draw on the strengths of both.

Key Characteristics

  • Architecture: Llama 2 base.
  • Parameter Count: 7 billion parameters.
  • Origin: Merged from Stable Beluga 7b and LimaRP Llama2 7B.
  • Development: Created by zarakiquemparte using a custom merge script.

Intended Use and Limitations

This model is primarily intended for general language generation. Like its constituent models, it is not designed to provide factual information or advice in any capacity; its strength is generating coherent, contextually relevant text that reflects the characteristics of its merged base models.
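The model card does not specify a prompt template. The Stable Beluga family documents a `### System:` / `### User:` / `### Assistant:` layout, so it is a reasonable (but unconfirmed) assumption that the merge accepts the same format. A minimal prompt builder under that assumption:

```python
def build_prompt(user_message: str,
                 system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a Stable-Beluga-style prompt.

    NOTE: this template is an assumption carried over from the Stable Beluga
    model cards; beluga-limarp-7b's own card does not document a format.
    """
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_message}\n\n"
        f"### Assistant:\n"
    )

print(build_prompt("Write a short greeting."))
```

The trailing `### Assistant:\n` leaves the completion point open, so the model's generated text begins directly as the assistant's reply.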

Reproducibility

The merging process for this model is reproducible using the provided script. For detailed information regarding the base models, users are encouraged to refer to the documentation for Stable Beluga 7B and LimaRP Llama2 7B.
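The contents of the author's script are not reproduced here. As an illustration of how such merges commonly work, the sketch below performs a linear (weighted-average) merge of two state dicts, with plain Python lists standing in for tensors; the blend ratio and the use of a linear merge are assumptions for illustration, not the author's confirmed method.

```python
def linear_merge(state_a: dict, state_b: dict, weight_a: float = 0.5) -> dict:
    """Weighted average of two state dicts with identical keys and shapes.

    `weight_a` is a hypothetical blend ratio; the actual procedure used for
    beluga-limarp-7b is defined by the author's merge script.
    """
    if state_a.keys() != state_b.keys():
        raise ValueError("models must share the same parameter names")
    merged = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        # Element-wise blend: weight_a * A + (1 - weight_a) * B
        merged[name] = [weight_a * a + (1 - weight_a) * b
                        for a, b in zip(tensor_a, tensor_b)]
    return merged

# Toy example: two "layers" as flat lists standing in for weight tensors.
beluga = {"layer.0.weight": [1.0, 2.0], "layer.1.weight": [3.0, 4.0]}
limarp = {"layer.0.weight": [3.0, 6.0], "layer.1.weight": [1.0, 0.0]}
merged = linear_merge(beluga, limarp, weight_a=0.5)
# merged["layer.0.weight"] == [2.0, 4.0]
```

With real checkpoints the same loop would run over `torch` tensors loaded from each model's weights, but the arithmetic is identical.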