gate369/BrurryDog-7b-v0.1

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Jan 20, 2024 · License: apache-2.0 · Architecture: Transformer

BrurryDog-7b-v0.1 is a 7 billion parameter language model created by gate369, formed by merging udkai/Turdus, leveldevai/TurdusBeagle-7B, and liminerity/Blur-7b-v1.21 using the TIES merge method. It features a 4096-token context length and achieves an average score of 74.24 on the Open LLM Leaderboard, demonstrating strong general reasoning and language understanding capabilities. This model is suitable for a wide range of general-purpose natural language processing tasks.


BrurryDog-7b-v0.1 Overview

BrurryDog-7b-v0.1 is a 7 billion parameter language model developed by gate369. It is a product of merging three distinct models: udkai/Turdus, leveldevai/TurdusBeagle-7B, and liminerity/Blur-7b-v1.21, utilizing the TIES merge method via LazyMergekit. This approach combines the strengths of its constituent models to enhance overall performance.
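The exact merge recipe is not reproduced on this page. As a rough illustration of what a TIES merge via LazyMergekit/mergekit looks like, a config of the following shape could be used; the base model, weights, and densities below are assumptions for illustration, not the published configuration:

```yaml
# Hypothetical mergekit TIES config for BrurryDog-7b-v0.1.
# base_model, weight, and density values are illustrative assumptions.
models:
  - model: udkai/Turdus
    parameters:
      density: 0.5
      weight: 0.4
  - model: leveldevai/TurdusBeagle-7B
    parameters:
      density: 0.5
      weight: 0.3
  - model: liminerity/Blur-7b-v1.21
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: bfloat16
```

In a TIES merge, each fine-tuned model's delta from the base is sparsified (controlled by `density`), sign conflicts across models are resolved by majority, and the surviving deltas are averaged with the given weights.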

Key Capabilities

  • General Reasoning: Achieves 72.53 on the AI2 Reasoning Challenge (25-shot).
  • Common Sense: Scores 88.37 on HellaSwag (10-shot) and 82.87 on Winogrande (5-shot).
  • Knowledge & Understanding: Demonstrates 64.74 on MMLU (5-shot) and 70.05 on TruthfulQA (0-shot).
  • Mathematical Reasoning: Attains 66.87 on GSM8k (5-shot).
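The leaderboard average reported for this model is the simple mean of the six benchmark scores above, which can be checked directly:

```python
# Benchmark scores from the Open LLM Leaderboard entry for this model
scores = {
    "ARC (25-shot)": 72.53,
    "HellaSwag (10-shot)": 88.37,
    "MMLU (5-shot)": 64.74,
    "TruthfulQA (0-shot)": 70.05,
    "Winogrande (5-shot)": 82.87,
    "GSM8k (5-shot)": 66.87,
}

# Unweighted mean, rounded to two decimals as the leaderboard displays it
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 74.24
```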

Performance Highlights

The model has been evaluated on the Open LLM Leaderboard, achieving an average score of 74.24. Detailed per-benchmark results are available on the leaderboard.

Good For

  • General-purpose natural language understanding and generation tasks.
  • Applications requiring robust reasoning and common-sense capabilities.
  • Scenarios where a merged model's combined strengths are beneficial.