Weyaxi/OpenOrca-Zephyr-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4K · Published: Oct 11, 2023 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights

Weyaxi/OpenOrca-Zephyr-7B is a 7-billion-parameter language model created by Weyaxi by merging HuggingFaceH4/zephyr-7b-alpha and Open-Orca/Mistral-7B-OpenOrca with a TIES merge. The merge combines the strengths of both base models into a balanced profile for general language understanding and generation, making it suitable for applications that need a capable 7B model with a 4096-token context window.


Model Overview

Weyaxi/OpenOrca-Zephyr-7B is a 7-billion-parameter language model developed by Weyaxi. It is the product of a TIES merge of two prominent base models:

  • HuggingFaceH4/zephyr-7b-alpha (merge weight 0.5, density 0.5)
  • Open-Orca/Mistral-7B-OpenOrca (merge weight 0.3, density 0.5)

This merging strategy aims to combine the distinct capabilities of its constituent models, leveraging their respective strengths in instruction following and general language understanding. The model operates with a context length of 4096 tokens.
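
In a TIES merge, each fine-tune's parameter delta from the shared base checkpoint is trimmed to its largest-magnitude entries, a per-parameter sign is elected across models, and only sign-agreeing contributions are averaged back onto the base. The NumPy sketch below illustrates that procedure for a single weight tensor. It is an illustration only, not the actual merge pipeline (merges like this are typically produced with a tool such as mergekit), and the normalization choice and toy tensors are assumptions.

```python
import numpy as np

def trim(tau: np.ndarray, density: float) -> np.ndarray:
    """Keep the top `density` fraction of entries by magnitude; zero the rest."""
    k = int(np.ceil(density * tau.size))
    if k == 0:
        return np.zeros_like(tau)
    threshold = np.partition(np.abs(tau).ravel(), -k)[-k]
    return np.where(np.abs(tau) >= threshold, tau, 0.0)

def ties_merge(base, tuned, weights, densities):
    """TIES merge of one tensor: trim, elect sign, disjoint-average, add to base."""
    # 1. Task vectors: each fine-tune's delta from the shared base checkpoint.
    taus = [t - base for t in tuned]
    # 2. Trim each task vector to its configured density (0.5 for both models here).
    taus = [trim(tau, d) for tau, d in zip(taus, densities)]
    # 3. Elect a per-parameter sign from the weighted sum of trimmed deltas.
    elected = np.sign(sum(w * tau for w, tau in zip(weights, taus)))
    # 4. Average only the contributions whose sign agrees with the elected one.
    agree = [(np.sign(tau) == elected) & (tau != 0) for tau in taus]
    numer = sum(w * np.where(m, tau, 0.0) for w, tau, m in zip(weights, taus, agree))
    denom = sum(w * m.astype(float) for w, m in zip(weights, agree))
    merged_tau = np.where(denom > 0, numer / np.maximum(denom, 1e-12), 0.0)
    # 5. Add the merged task vector back onto the base weights.
    return base + merged_tau

# Toy example with random tensors standing in for one layer's weights.
rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))
zephyr = base + 0.1 * rng.normal(size=base.shape)
openorca = base + 0.1 * rng.normal(size=base.shape)
merged = ties_merge(base, [zephyr, openorca], weights=[0.5, 0.3], densities=[0.5, 0.5])
```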

Key Characteristics

  • Merged Architecture: combines Zephyr-7B-alpha and Mistral-7B-OpenOrca via a TIES merge for balanced performance.
  • 7 Billion Parameters: a strong trade-off between capability and computational cost.
  • 4096-Token Context: supports moderately long inputs and coherent responses (see the budgeting sketch after this list).
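
Because the window is fixed at 4096 tokens, long prompts must leave room for generation. The snippet below is a minimal sketch of that budgeting; `fits_in_context` is a hypothetical helper, and the count assumes the tokenizer hosted with the model.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/OpenOrca-Zephyr-7B")

MAX_CONTEXT = 4096  # the model's context window, per the card above

def fits_in_context(prompt: str, max_new_tokens: int = 256) -> bool:
    """Hypothetical helper: check prompt tokens plus generation budget fit the window."""
    n_prompt_tokens = len(tokenizer(prompt).input_ids)
    return n_prompt_tokens + max_new_tokens <= MAX_CONTEXT

print(fits_in_context("Summarize the following article: ..."))  # True for short prompts
```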

Use Cases

This model is well-suited for a variety of general-purpose natural language processing tasks, including:

  • Text generation
  • Question answering
  • Summarization
  • Instruction following

While the original README does not report benchmark results, the model's foundation on two well-regarded base models suggests solid performance across common language-model applications.
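
As a quick start, the model can be loaded with the Hugging Face transformers library. This is a minimal sketch: it assumes the repository ships a chat template (so `apply_chat_template` produces the correct prompt format) and that FP16 weights fit on the available GPU; the sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/OpenOrca-Zephyr-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: a GPU with ~16 GB of memory is available
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the TIES merging method in two sentences."},
]
# Format the conversation with the repo's own prompt template, if one is shipped.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```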