zarakiquemparte/zarablend-mx-l2-7b
The zarablend-mx-l2-7b model by zarakiquemparte is a 7-billion-parameter language model with a 4096-token context length, created by merging Nous Hermes Llama2 7b (53%) and Airoboros L2 7B GPT4 m2.0 (47%), then applying the LimaRP Llama2 7B LoRA on top of the merge. The resulting model supports both the Alpaca 2 and LimaRP instruction formats, making it versatile for conversational and instruction-following tasks that benefit from a blend of its constituent models' capabilities.
Model Overview
The zarablend-mx-l2-7b is a 7-billion-parameter language model developed by zarakiquemparte using a two-stage merging strategy. It first combines Nous Hermes Llama2 7b (53%) with Airoboros L2 7B GPT4 m2.0 (47%), and the resulting merge is then integrated with the LimaRP Llama2 7B LoRA. The merge was executed with custom scripts, producing a tailored blend of characteristics from its base models.
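The first stage of a recipe like this can be pictured as a linear interpolation of the two models' parameters. The toy sketch below uses plain Python lists in place of real Llama2 tensors, and `merge_state_dicts` is a hypothetical helper name; the author's actual custom merge scripts may work differently:

```python
def merge_state_dicts(sd_a, sd_b, ratio_a=0.53):
    """Linearly interpolate two state dicts that share the same keys.

    ratio_a is the weight given to the first model (53% here),
    with the remainder (47%) going to the second model.
    """
    ratio_b = 1.0 - ratio_a
    return {
        key: [ratio_a * a + ratio_b * b for a, b in zip(sd_a[key], sd_b[key])]
        for key in sd_a
    }

# Toy 1-D "weights" standing in for real model tensors.
hermes = {"layer.weight": [1.0, 2.0]}
airoboros = {"layer.weight": [3.0, 4.0]}
merged = merge_state_dicts(hermes, airoboros)
print(merged["layer.weight"])
```

In practice this interpolation would run over every tensor of two full checkpoints, and the LimaRP LoRA would then be applied to the merged weights as a separate step.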
Key Capabilities
- Hybrid Instruction Support: The model is designed to be compatible with multiple instruction formats, including:
- Alpaca 2 format
- LimaRP instruction format (supporting system prompts and character cards)
- Merged Architecture: Leverages the combined knowledge and capabilities of its constituent models, potentially offering a broader range of responses and understanding.
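As an illustration of the first supported format, Alpaca-style templates commonly wrap a request in `### Instruction:` / `### Response:` headers, with an optional `### Input:` section. `build_alpaca_prompt` below is a hypothetical helper; the exact template this model expects should be confirmed against its model card:

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the common Alpaca-style layout (assumed format)."""
    parts = ["### Instruction:", instruction, ""]
    if user_input:
        # Optional input section for tasks that operate on provided text.
        parts += ["### Input:", user_input, ""]
    parts += ["### Response:", ""]
    return "\n".join(parts)

prompt = build_alpaca_prompt("Summarize the following text.", "Some text here.")
print(prompt)
```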
Usage Considerations
- Instruction Formats: Users should adhere to either the Alpaca 2 or LimaRP instruction formats for optimal performance.
- Limitations: The model is explicitly not intended for supplying factual information or advice in any form; it is aimed at conversational and creative generation rather than factual accuracy.
- Reproducibility: The merging process is transparent and can be reproduced with the provided scripts, enabling further experimentation and inspection of how the model was constructed.
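For the second supported format, LimaRP-style prompts prepend a system prompt and a character card ahead of the chat history. The sketch below only illustrates that ordering; `build_roleplay_prompt` is a hypothetical helper and its plain-text section layout is an assumption, so the real delimiters should be copied from the LimaRP documentation:

```python
def build_roleplay_prompt(system_prompt: str, character_card: str,
                          turns: list[tuple[str, str]]) -> str:
    """Place a system prompt, then a character card, then the chat turns.

    The bare-text sections used here are illustrative placeholders, not
    the actual LimaRP template tokens.
    """
    lines = [system_prompt, "", character_card, ""]
    for speaker, text in turns:
        lines.append(f"{speaker}: {text}")
    lines.append("")  # trailing newline leaves room for the model's reply
    return "\n".join(lines)

prompt = build_roleplay_prompt(
    "You are roleplaying as the character described below.",
    "Name: Ayla. Personality: curious, dry-witted.",
    [("User", "Hello there."), ("Ayla", "Well, hello yourself.")],
)
print(prompt)
```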