Weyaxi/Dolphin2.1-OpenOrca-7B
Text generation | Concurrency cost: 1 | Model size: 7B | Quant: FP8 | Context length: 4k | Published: Oct 11, 2023 | License: apache-2.0 | Architecture: Transformer | Open weights

Weyaxi/Dolphin2.1-OpenOrca-7B is a 7-billion-parameter instruction-tuned language model created by Weyaxi as a TIES merge of ehartford/dolphin-2.1-mistral-7b and Open-Orca/Mistral-7B-OpenOrca. The merge combines the strengths of both parent models, giving balanced performance across a range of reasoning and language-understanding tasks. With a 4096-token context length, it is suited to general-purpose conversational AI and instruction-following applications.
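As a minimal sketch of how a request to this model might be formatted, the snippet below builds a ChatML-style prompt. The template is an assumption based on the Dolphin 2.1 parent model's conventions, and `format_chatml` is a hypothetical helper; verify the exact prompt format against the model card before use, and keep the full prompt within the 4096-token context.

```python
def format_chatml(system: str, user: str) -> str:
    # ChatML-style template (assumed from the dolphin-2.1 parent model);
    # the assistant turn is left open for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml(
    "You are a helpful assistant.",
    "Summarize the TIES merge method in one sentence.",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model (for example via an inference endpoint or a local `transformers` pipeline).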
