liminerity/binarized-ingotrix-slerp-7b

Text Generation · Model Size: 7B · Quant: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Feb 12, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

liminerity/binarized-ingotrix-slerp-7b is a 7-billion-parameter language model created by liminerity by merging eren23/dpo-binarized-NeuralTrix-7B and liminerity/Ingot-7b-slerp-7-forged with the slerp (spherical linear interpolation) merge method. The model demonstrates strong general reasoning, with an average score of 76.04 on the Open LLM Leaderboard, and is suited to tasks that require robust performance across common-sense reasoning and language-understanding benchmarks.


Model Overview

liminerity/binarized-ingotrix-slerp-7b is a 7-billion-parameter language model developed by liminerity. It was produced by merging two models, eren23/dpo-binarized-NeuralTrix-7B and liminerity/Ingot-7b-slerp-7-forged, using the slerp (spherical linear interpolation) merge method via LazyMergekit. Unlike plain linear averaging, slerp interpolates corresponding weight tensors along the arc between them, a strategy intended to combine the strengths of the constituent models.
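As a rough illustration of the operation (this is not liminerity's actual merge script, and the function name and the t = 0.5 factor below are purely illustrative), a per-tensor slerp can be sketched in Python as follows:

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as one flat vector and moves along the arc
    between the two, falling back to plain linear interpolation when
    the vectors are nearly colinear (where slerp is ill-conditioned).
    """
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Cosine of the angle between the normalized weight vectors.
    cos_omega = torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.arccos(cos_omega)
    if omega.abs() < 1e-4:                      # nearly colinear: use lerp
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * v0 + (
            torch.sin(t * omega) / sin_omega
        ) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Conceptually, the merge applies this tensor by tensor, e.g. at t = 0.5:
# merged = {name: slerp(0.5, state_a[name], state_b[name]) for name in state_a}
```

In practice, mergekit's slerp method also supports per-module interpolation factors (for example, different t schedules for attention and MLP weights); the specific factors used for this merge were not published here.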

Key Capabilities & Performance

The model exhibits strong general-purpose language understanding and reasoning, as evidenced by its evaluation on the Open LLM Leaderboard. Key performance metrics include:

  • Average Score: 76.04
  • AI2 Reasoning Challenge (25-shot): 73.21
  • HellaSwag (10-shot): 88.64
  • MMLU (5-shot): 64.85
  • TruthfulQA (0-shot): 75.57
  • Winogrande (5-shot): 82.87
  • GSM8k (5-shot): 71.11

These scores indicate balanced performance across reasoning, common-sense, and knowledge-based tasks; the 76.04 average is simply the unweighted mean of the six benchmark scores.
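That average can be checked directly from the figures above:

```python
scores = {
    "ARC (25-shot)":       73.21,
    "HellaSwag (10-shot)": 88.64,
    "MMLU (5-shot)":       64.85,
    "TruthfulQA (0-shot)": 75.57,
    "Winogrande (5-shot)": 82.87,
    "GSM8k (5-shot)":      71.11,
}
# Unweighted arithmetic mean of the six Open LLM Leaderboard benchmarks.
print(round(sum(scores.values()) / len(scores), 2))  # 76.04
```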

Use Cases

This model is well suited to applications that need a capable 7B-parameter model with solid performance across a range of benchmarks. Its balanced scores suggest suitability for tasks such as the following (a minimal loading sketch appears after the list):

  • General text generation and completion
  • Question answering
  • Reasoning tasks
  • Common sense understanding
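The model card does not include a usage snippet; assuming the weights are published in the standard Hugging Face Transformers format, a minimal text-generation example would look like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/binarized-ingotrix-slerp-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # requires `accelerate`; spreads layers over available devices
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that `torch_dtype="auto"` defers to whatever precision the published checkpoint uses; for tighter memory budgets, quantized loading is a common alternative.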