Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp

Text generation · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Dec 8, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights

Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp is a 7 billion parameter language model created by Weyaxi by merging teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-2 with the slerp method. It performs well across a broad set of benchmarks, reaching an average score of 70.2 on the Open LLM Leaderboard, and is suited to general conversational AI applications and tasks that require robust language understanding and generation.


Model Overview

Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp is a 7 billion parameter language model developed by Weyaxi. This model was created using mergekit with the slerp (spherical linear interpolation) method, combining the strengths of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-2. It is built upon the mistralai/Mistral-7B-v0.1 base model.
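A slerp merge in mergekit is driven by a small YAML file naming the two source models, the base model, and an interpolation factor. The exact configuration Weyaxi published is not reproduced here; the sketch below is a plausible minimal version using the models named above, with an assumed interpolation factor `t: 0.5`.

```yaml
# Hypothetical mergekit slerp configuration (illustrative, not Weyaxi's
# published file). Interpolates layer weights spherically between the
# two source models over all 32 transformer layers of Mistral-7B.
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: Intel/neural-chat-7b-v3-2
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t: 0.5  # assumed midpoint; real configs often vary t per layer group
dtype: bfloat16
```

In practice, `t` can be a list that varies across attention and MLP layers, biasing the merge toward one parent in different parts of the network.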

Key Capabilities & Performance

This model exhibits solid performance across a range of benchmarks, as evaluated on the Open LLM Leaderboard:

  • Average Score: 70.2
  • ARC (25-shot): 67.49
  • HellaSwag (10-shot): 85.42
  • MMLU (5-shot): 64.13
  • TruthfulQA (0-shot): 61.05
  • Winogrande (5-shot): 80.30
  • GSM8K (5-shot): 63.08
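As a quick sanity check, the headline average is simply the arithmetic mean of the six benchmark scores listed above:

```python
# Verify that the reported Open LLM Leaderboard average (70.2) matches
# the mean of the six individual benchmark scores from this model card.
scores = {
    "ARC": 67.49,
    "HellaSwag": 85.42,
    "MMLU": 64.13,
    "TruthfulQA": 61.05,
    "Winogrande": 80.30,
    "GSM8K": 63.08,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}")  # prints 70.2
```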

Prompting

ChatML is the recommended prompt format for this model. Because it is a merge, it also responds to the templates of both parent models, OpenHermes-2.5-Mistral-7B and neural-chat-7b-v3-2, which offers some flexibility in how it is prompted.
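ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` tags and ends with an open assistant turn for the model to complete. A minimal sketch of building such a prompt (the helper name `build_chatml_prompt` is ours, not part of any library):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system instruction and a user message in ChatML tags,
    leaving an open assistant turn for the model to continue."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize slerp model merging in one sentence.",
)
print(prompt)
```

The resulting string is what gets tokenized and sent to the model; generation stops when the model emits its own `<|im_end|>`.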

Important Note

Users are advised that a newer version, Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp, is available and recommended for use.