ChaoticNeutrals/Prima-LelantaclesV6-7b

Text generation · Model size: 7B · Quant: FP8 · Context length: 4K · Concurrency cost: 1 · Published: Mar 1, 2024 · License: other · Architecture: Transformer

Prima-LelantaclesV6-7b is a 7-billion-parameter language model created by ChaoticNeutrals by merging Test157t/West-Pasta-Lake-7b and Test157t/Lelantacles6-Experiment26-7B with the DARE TIES method. The model shows strong general reasoning, with an average score of 73.41 on the Open LLM Leaderboard and notably high results on HellaSwag and Winogrande, making it suitable for a range of general-purpose language understanding and generation tasks.


Prima-LelantaclesV6-7b: A Merged 7B Language Model

This model, developed by ChaoticNeutrals, is a 7-billion-parameter language model created by merging two existing models: Test157t/West-Pasta-Lake-7b and Test157t/Lelantacles6-Experiment26-7B. The merge was performed with DARE TIES, a method that randomly drops and rescales each model's fine-tuning deltas (DARE) and resolves sign conflicts between them (TIES) so the strengths of both models can be combined into a single checkpoint.
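DARE TIES merges of this kind are commonly produced with the mergekit toolkit. The following is only a sketch of what such a config might look like; the density and weight values and the choice of base model are illustrative assumptions, not the published recipe for this model:

```yaml
# Hypothetical mergekit config for a DARE TIES merge of the two parents.
# density/weight values and base_model choice are assumptions for illustration.
models:
  - model: Test157t/West-Pasta-Lake-7b
    parameters:
      density: 0.53   # fraction of delta parameters kept after DARE dropping
      weight: 0.5     # contribution of this model to the merged weights
  - model: Test157t/Lelantacles6-Experiment26-7B
    parameters:
      density: 0.53
      weight: 0.5
merge_method: dare_ties
base_model: Test157t/Lelantacles6-Experiment26-7B
dtype: bfloat16
```

With mergekit installed, a config like this would typically be run via `mergekit-yaml config.yml ./output-dir`.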

Key Capabilities & Performance

The Prima-LelantaclesV6-7b demonstrates solid performance across various benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 73.41, indicating strong general language understanding and reasoning. Specific benchmark results include:

  • AI2 Reasoning Challenge (25-shot): 71.50
  • HellaSwag (10-shot): 87.65
  • MMLU (5-shot): 64.64
  • TruthfulQA (0-shot): 64.29
  • Winogrande (5-shot): 84.85
  • GSM8k (5-shot): 67.55

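The leaderboard average is simply the arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores listed above
scores = {
    "ARC (25-shot)": 71.50,
    "HellaSwag (10-shot)": 87.65,
    "MMLU (5-shot)": 64.64,
    "TruthfulQA (0-shot)": 64.29,
    "Winogrande (5-shot)": 84.85,
    "GSM8k (5-shot)": 67.55,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 73.41, matching the reported leaderboard average
```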
These scores suggest the model is well-suited for tasks requiring common sense reasoning, question answering, and general knowledge.

Use Cases

Given its balanced performance across multiple benchmarks, Prima-LelantaclesV6-7b is a versatile model suitable for:

  • General text generation and completion
  • Reasoning tasks and logical inference
  • Question answering systems
  • Content creation requiring broad knowledge