ZaraXE L2 7b: A Merged Language Model
The ZaraXE L2 7b is a 7 billion parameter language model developed by zarakiquemparte, created by merging several existing models. Its base is a weighted merge of Zarafusionex L2 7b (without LimaRP), contributing 71%, and Airoboros L2 7B GPT4 2.0, contributing 29%. The LimaRP LLama2 7B Lora was then applied on top of this merged base.
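For readers unfamiliar with weighted merging, the sketch below shows one common way such a merge is done: linear interpolation of every parameter tensor at the stated 71/29 ratio. This is an illustration only; the author's actual merge script may differ, and the model paths are placeholders.

import torch
from transformers import AutoModelForCausalLM

BASE_RATIO = 0.71   # Zarafusionex L2 7b (without LimaRP)
OTHER_RATIO = 0.29  # Airoboros L2 7B GPT4 2.0

# Placeholder paths; substitute the real checkpoints.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/zarafusionex-l2-7b", torch_dtype=torch.float16
)
other = AutoModelForCausalLM.from_pretrained(
    "path/to/airoboros-l2-7b-gpt4-2.0", torch_dtype=torch.float16
)

# Both models share the Llama-2 7B architecture, so their state dicts
# have identical keys and shapes; interpolate tensor by tensor.
other_state = other.state_dict()
merged_state = {
    name: BASE_RATIO * tensor + OTHER_RATIO * other_state[name]
    for name, tensor in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("zaraxe-l2-7b-base")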
Key Characteristics & Capabilities
Because of its merging approach, the model inherits support for multiple instruction formats, making it versatile across conversational and text generation tasks. Users can interact with it using either of the following formats (a prompt-building sketch follows the list):
- Alpaca 2 instruction format:
  ### Instruction:\n<prompt>\n\n### Response:\n
- LimaRP instruction format:
  <<SYSTEM>>\n<character card and system prompt>\n\n<<USER>>\n<prompt>\n\n<<AIBOT>>\n
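To make the escape sequences above concrete, here is a small sketch that assembles prompts in both formats. The helper names are illustrative, not part of the model card.

def alpaca2_prompt(prompt: str) -> str:
    # Alpaca 2 format: instruction header, prompt, blank line, response header.
    return f"### Instruction:\n{prompt}\n\n### Response:\n"

def limarp_prompt(system: str, prompt: str) -> str:
    # LimaRP format: system/character card, user turn, then the bot tag
    # the model completes.
    return f"<<SYSTEM>>\n{system}\n\n<<USER>>\n{prompt}\n\n<<AIBOT>>\n"

print(alpaca2_prompt("Summarize the plot of Hamlet in two sentences."))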
Training & Reproducibility
Because ZaraXE L2 7b is a merged model, its creation process is transparent and reproducible. The merging operations were performed using custom scripts provided by zarakiquemparte: a merge script for the base models and an apply-lora script for integrating LimaRP. Detailed information about its constituent models can be found via the links in the original model card.
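The apply-lora step can be approximated with the peft library, as sketched below. This is an assumption about the workflow, not the author's actual script, and the LoRA path is a placeholder.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Start from the merged base produced in the previous step.
base = AutoModelForCausalLM.from_pretrained(
    "zaraxe-l2-7b-base", torch_dtype=torch.float16
)

# Placeholder path for the LimaRP LLama2 7B Lora adapter.
lora = PeftModel.from_pretrained(base, "path/to/limarp-llama2-7b-lora")

# Fold the LoRA deltas into the base weights to get a standalone model.
merged = lora.merge_and_unload()
merged.save_pretrained("zaraxe-l2-7b")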
Limitations
Consistent with its constituent models, ZaraXE L2 7b is not intended to supply factual information or advice in any form.