Weyaxi/SlimOpenOrca-Mistral-7B

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Oct 11, 2023 · License: apache-2.0 · Architecture: Transformer

Weyaxi/SlimOpenOrca-Mistral-7B is a 7-billion-parameter language model created by Weyaxi by merging two Open-Orca models, Mistral-7B-SlimOrca and Mistral-7B-OpenOrca, with the TIES merge method. It combines the strengths of its base components, offering balanced performance on general-purpose conversational and instruction-following tasks within a 4096-token context window.
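
A minimal inference sketch using Hugging Face transformers is shown below. The ChatML prompt template is an assumption based on the OpenOrca Mistral base models; verify it against the model card before relying on it.

```python
# Minimal inference sketch for Weyaxi/SlimOpenOrca-Mistral-7B.
# The ChatML template below is assumed from the OpenOrca Mistral base models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/SlimOpenOrca-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain model merging in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```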

Model Overview

Weyaxi/SlimOpenOrca-Mistral-7B is a 7-billion-parameter language model developed by Weyaxi. It is a merge of two base models from Open-Orca, Mistral-7B-SlimOrca and Mistral-7B-OpenOrca, produced with the TIES merge method using the weights and densities listed below.

Merging Details

  • Weights:
    • Open-Orca/Mistral-7B-SlimOrca: 0.5
    • Open-Orca/Mistral-7B-OpenOrca: 0.3
  • Density:
    • Open-Orca/Mistral-7B-SlimOrca: 0.5
    • Open-Orca/Mistral-7B-OpenOrca: 0.5

This configuration aims to leverage the distinct characteristics of both SlimOrca and OpenOrca, which are known for their instruction-following and reasoning capabilities, respectively. The model operates with a context length of 4096 tokens.
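
The sketch below illustrates, in simplified form, what a TIES-style merge does with these weights and densities: trim each model's delta from the base, elect a per-parameter sign, and merge only the agreeing contributions. This is not the exact mergekit implementation used to build this model; the function names and the disjoint-mean step are assumptions based on the TIES-Merging paper.

```python
# Simplified, illustrative TIES-style merge over fine-tuned checkpoints.
# Not the actual pipeline used for this model; a sketch of the technique.
import torch

def trim(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep only the largest-magnitude `density` fraction of entries."""
    k = max(1, int(density * delta.numel()))
    flat = delta.abs().flatten()
    threshold = flat.kthvalue(flat.numel() - k + 1).values
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))

def ties_merge(base_sd, model_sds, weights, densities):
    """base_sd/model_sds are state dicts; weights/densities are parallel lists."""
    merged = {}
    for name, base_param in base_sd.items():
        # Task vectors: each model's weighted, trimmed delta from the base.
        deltas = torch.stack([
            w * trim(sd[name] - base_param, d)
            for sd, w, d in zip(model_sds, weights, densities)
        ])
        # Elect a per-parameter sign from the summed deltas, then keep only
        # contributions that agree with it (resolving sign interference).
        sign = torch.sign(deltas.sum(dim=0))
        agree = (torch.sign(deltas) == sign).float()
        count = agree.sum(dim=0).clamp(min=1.0)
        merged[name] = base_param + (deltas * agree).sum(dim=0) / count
    return merged

# Usage with the weights and densities from this model card:
# merged = ties_merge(base_sd, [slimorca_sd, openorca_sd],
#                     weights=[0.5, 0.3], densities=[0.5, 0.5])
```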

Quantized Versions

For optimized deployment and reduced resource consumption, quantized versions of SlimOpenOrca-Mistral-7B are available from TheBloke, enabling efficient inference on a range of hardware setups.
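
For example, a GGUF quantization can be run with llama-cpp-python as sketched below. The repo id and filename are assumptions based on TheBloke's usual naming conventions, and the ChatML template is assumed from the base models; check the actual quantized repo before use.

```python
# Hypothetical example: running an assumed GGUF quantization of this model
# with llama-cpp-python. Repo id and filename below are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/SlimOpenOrca-Mistral-7B-GGUF",   # assumed repo id
    filename="slimopenorca-mistral-7b.Q4_K_M.gguf",    # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=4096)  # full 4096-token context

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is model merging?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```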