liminerity/Neurotic-Jomainotrik-7b-slerp

Text generation · 7B parameters · FP8 quantization · 4k context length · Open weights · Published: Feb 25, 2024 · License: apache-2.0 · Architecture: Transformer

liminerity/Neurotic-Jomainotrik-7b-slerp is a 7-billion-parameter language model created by liminerity by merging liminerity/merge and bardsai/jaskier-7b-dpo-v5.6 with the slerp method. It achieves an average score of 76.40 on the Open LLM Leaderboard, with notably strong results in common sense reasoning and question answering, making it well-suited for general understanding and generation tasks.


Overview

liminerity/Neurotic-Jomainotrik-7b-slerp is a 7-billion-parameter language model developed by liminerity. It was produced by merging two models, liminerity/merge and bardsai/jaskier-7b-dpo-v5.6, using the slerp (spherical linear interpolation) merge method via mergekit. Unlike plain linear averaging, slerp interpolates along the arc between the two parents' weight vectors, which tends to better preserve the geometry of each parent's learned representations; the goal is a single model that combines the strengths of both.
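For intuition, here is a minimal sketch of spherical linear interpolation between two weight tensors. This illustrates the idea only; it is not mergekit's actual implementation, and the layer name in the usage comment is a hypothetical example.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors,
    treating each as a single flattened vector."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors (directions only).
    cos_omega = torch.dot(a_flat / (a_flat.norm() + eps),
                          b_flat / (b_flat.norm() + eps))
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: slerp degenerates to linear interpolation.
        return (1.0 - t) * a + t * b
    merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
           + (torch.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# Hypothetical usage: blend one layer's weights halfway between the parents.
# merged_w = slerp(0.5, weights_a["model.layers.0.mlp.up_proj.weight"],
#                       weights_b["model.layers.0.mlp.up_proj.weight"])
```

In an actual merge, the interpolation factor t is typically chosen per tensor or per layer group (e.g. different schedules for attention and MLP weights) and applied across all parameters of the two parents.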

Key Capabilities & Performance

This model has been evaluated on the Open LLM Leaderboard, where it achieved an average score of 76.40, the arithmetic mean of the following six benchmark results:

  • AI2 Reasoning Challenge (25-shot): 72.95
  • HellaSwag (10-shot): 89.15
  • MMLU (5-shot): 64.28
  • TruthfulQA (0-shot): 77.64
  • Winogrande (5-shot): 85.40
  • GSM8k (5-shot): 68.99

These scores indicate proficiency across common sense reasoning (HellaSwag, Winogrande), broad knowledge recall (MMLU), factual calibration (TruthfulQA), and grade-school mathematical problem-solving (GSM8k).
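The Open LLM Leaderboard computes these scores with EleutherAI's lm-evaluation-harness. Below is a sketch of reproducing a single benchmark locally using the harness's v0.4-style simple_evaluate entry point; exact leaderboard settings (harness version, prompt formatting, metric selection) may differ, so treat the result as an approximation.

```python
import lm_eval  # EleutherAI lm-evaluation-harness, v0.4+

# Evaluate HellaSwag with 10-shot prompting, mirroring the leaderboard setting.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=liminerity/Neurotic-Jomainotrik-7b-slerp,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=10,
    batch_size=8,
)
# The leaderboard reports normalized accuracy (acc_norm) for HellaSwag.
print(results["results"]["hellaswag"])
```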

When to Use This Model

Given its balanced performance across reasoning and language understanding benchmarks, Neurotic-Jomainotrik-7b-slerp is a strong candidate for general-purpose applications where a 7B-parameter model fits the latency and memory budget. Its high HellaSwag and Winogrande scores suggest it performs best in scenarios that demand common sense and contextual understanding. Keep in mind the 4k context length when working with long documents; the apache-2.0 license permits commercial use.
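A minimal sketch for loading the model with Hugging Face transformers follows. The plain-text prompt format is an assumption, since the model card does not specify a chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/Neurotic-Jomainotrik-7b-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # spread layers across available devices
)

prompt = "Q: If a train travels 60 miles in 45 minutes, what is its speed in mph?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```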