nlpguy/Lelantos-low-tune

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 6, 2024 · License: apache-2.0 · Architecture: Transformer

nlpguy/Lelantos-low-tune is a 7 billion parameter language model created by nlpguy using the task arithmetic merge method. It is based on openaccess-ai-collective/DPOpenHermes-7B-v2 and incorporates SanjiWatsuki/Lelantos-7B. This model demonstrates strong general reasoning capabilities, achieving an average score of 70.82 on the Open LLM Leaderboard, making it suitable for a variety of general-purpose conversational and analytical tasks.


Model Overview

nlpguy/Lelantos-low-tune is a 7 billion parameter language model developed by nlpguy. It was created using the task arithmetic merge method via mergekit, combining openaccess-ai-collective/DPOpenHermes-7B-v2 as its base with SanjiWatsuki/Lelantos-7B.
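Based on the merge details above, the mergekit configuration for this model plausibly looked something like the following sketch. The field names follow mergekit's standard task arithmetic config format, but the exact `dtype` and parameter layout here are assumptions, not the author's published file:

```yaml
# Sketch of a mergekit task arithmetic config (reconstructed, not the original).
models:
  - model: openaccess-ai-collective/DPOpenHermes-7B-v2
    # Base model contributes implicitly; its task vector is zero.
  - model: SanjiWatsuki/Lelantos-7B
    parameters:
      weight: 0.5
merge_method: task_arithmetic
base_model: openaccess-ai-collective/DPOpenHermes-7B-v2
dtype: bfloat16  # assumed; the card does not state the merge dtype
```

In task arithmetic, each fine-tuned model is represented by its delta from the base; the deltas are scaled by their weights and added back onto the base checkpoint.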

Key Capabilities & Performance

This model exhibits robust performance across a range of benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 70.82, with notable results including:

  • AI2 Reasoning Challenge (25-shot): 67.06
  • HellaSwag (10-shot): 86.06
  • MMLU (5-shot): 64.11
  • TruthfulQA (0-shot): 61.33
  • Winogrande (5-shot): 79.56
  • GSM8k (5-shot): 66.79

These scores indicate strong general reasoning, common sense, and language understanding abilities, making it a versatile choice for various applications.
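The reported leaderboard average is simply the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Sanity check: the Open LLM Leaderboard average is the mean of the
# six benchmark scores listed above.
scores = {
    "ARC (25-shot)": 67.06,
    "HellaSwag (10-shot)": 86.06,
    "MMLU (5-shot)": 64.11,
    "TruthfulQA (0-shot)": 61.33,
    "Winogrande (5-shot)": 79.56,
    "GSM8k (5-shot)": 66.79,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 70.82
```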

Merge Details

The merge applies a weight of 0.5 to SanjiWatsuki/Lelantos-7B uniformly across all 32 transformer layers, with openaccess-ai-collective/DPOpenHermes-7B-v2 serving as the base model.
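The per-layer arithmetic can be illustrated with a toy sketch. This is not the actual merge code or real model weights, just the formula task arithmetic applies to every tensor: `merged = base + w * (fine_tuned - base)`, which at `w = 0.5` reduces to the elementwise midpoint of the two models:

```python
import numpy as np

# Toy illustration of task arithmetic merging (random tensors, not real weights).
# Each fine-tune contributes a "task vector" (its delta from the base),
# which is scaled by its weight and added back to the base.
rng = np.random.default_rng(0)

def task_arithmetic_merge(base, fine_tuned, weight=0.5):
    """Merge one tensor pair via task arithmetic at the given weight."""
    return base + weight * (fine_tuned - base)

# Stand-ins for the 32 layers' weight tensors (shapes are illustrative).
base = {f"layer.{i}.weight": rng.normal(size=(4, 4)) for i in range(32)}
ft = {f"layer.{i}.weight": rng.normal(size=(4, 4)) for i in range(32)}

merged = {name: task_arithmetic_merge(base[name], ft[name], 0.5)
          for name in base}

# With a single donor model at weight 0.5, the result is the midpoint.
print(np.allclose(merged["layer.0.weight"],
                  (base["layer.0.weight"] + ft["layer.0.weight"]) / 2))
```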