Kukedlc/Fasciculus-Arcuatus-7B-slerp

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Feb 29, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Kukedlc/Fasciculus-Arcuatus-7B-slerp is a 7 billion parameter model created by Kukedlc through a slerp merge of macadeliccc/MonarchLake-7B and Kukedlc/NeoCortex-7B-slerp. It posts strong scores on general reasoning benchmarks, including the AI2 Reasoning Challenge, HellaSwag, and MMLU, averaging 76.07 on the Open LLM Leaderboard. With a 4096-token context length, it is suitable for tasks requiring robust language understanding and generation.


Model Overview

Kukedlc/Fasciculus-Arcuatus-7B-slerp is a 7 billion parameter language model developed by Kukedlc. It is the product of a spherical linear interpolation (slerp) merge, combining macadeliccc/MonarchLake-7B and Kukedlc/NeoCortex-7B-slerp using LazyMergekit. Rather than averaging weights linearly, slerp interpolates along the arc between corresponding weight vectors of the two parents, which better preserves the geometric properties of each model's weights during the blend.
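For intuition, the sketch below applies the slerp formula to a pair of weight tensors. This is an illustrative reimplementation, not LazyMergekit's actual code; the real tool applies per-tensor (and often per-layer) interpolation factors defined in a YAML config.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as a flat vector and interpolates along the great
    circle connecting them; falls back to linear interpolation when the
    vectors are nearly parallel.
    """
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Cosine of the angle between the two weight vectors.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: slerp degenerates to plain lerp.
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (
            (torch.sin((1.0 - t) * omega) / sin_omega) * v0
            + (torch.sin(t * omega) / sin_omega) * v1
        )
    return merged.reshape(w0.shape).to(w0.dtype)

# Hypothetical usage: blend corresponding tensors from the two parent models.
# merged_weight = slerp(0.5, monarchlake_weight, neocortex_weight)
```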

Key Capabilities & Performance

This model exhibits strong performance across a range of general language understanding and reasoning benchmarks, as evaluated on the Open LLM Leaderboard. Key results include:

  • Avg. Score: 76.07
  • AI2 Reasoning Challenge (25-shot): 73.55
  • HellaSwag (10-shot): 88.95
  • MMLU (5-shot): 64.65
  • TruthfulQA (0-shot): 72.53
  • Winogrande (5-shot): 85.71
  • GSM8k (5-shot): 71.04

These scores indicate balanced performance across tasks requiring commonsense reasoning, factual recall, and mathematical problem-solving.

When to Use This Model

Given its balanced benchmark performance, Fasciculus-Arcuatus-7B-slerp is well-suited to general-purpose applications that call for a 7B parameter model with solid reasoning and language generation capabilities. Its 4096-token context length supports moderate input and output sizes, making it versatile across text-based tasks.
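A minimal loading sketch using the Hugging Face transformers library is shown below, assuming a standard causal-LM checkpoint; the prompt and generation parameters are illustrative, not recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/Fasciculus-Arcuatus-7B-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps a 7B model near 14 GB
    device_map="auto",          # place layers on available GPU(s)/CPU
)

prompt = "Explain the difference between deductive and inductive reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```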