gate369/Blurdus-7b-v0.1

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Jan 20, 2024 · License: apache-2.0 · Architecture: Transformer

Blurdus-7b-v0.1 is a 7-billion-parameter language model developed by gate369, created by merging Blurred-Beagle-7b-slerp, BrurryDog-7b-v0.1, and Blur-7b-v1.21 with the ties merging method. The model demonstrates strong general reasoning capabilities, achieving an average score of 74.35 across the six Open LLM Leaderboard benchmarks. With a 4096-token context length, it is suitable for a range of general-purpose language generation and understanding tasks.


Blurdus-7b-v0.1: A Merged 7B Language Model

Blurdus-7b-v0.1 is a 7-billion-parameter language model developed by gate369, constructed by merging three distinct models: Blurred-Beagle-7b-slerp, BrurryDog-7b-v0.1, and Blur-7b-v1.21. The merge was performed with the ties method via LazyMergekit, using udkai/Turdus as the base model.
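
A LazyMergekit ties merge of this kind is driven by a YAML configuration. The sketch below shows the typical shape of such a config; the density and weight values (and the shortened model names, which would need full Hugging Face repo ids in practice) are illustrative assumptions, not the author's published settings:

```yaml
# Illustrative mergekit "ties" config; densities/weights are assumed,
# and each model entry needs its full hub id in a real merge.
models:
  - model: Blurred-Beagle-7b-slerp
    parameters:
      density: 0.5
      weight: 1.0
  - model: BrurryDog-7b-v0.1
    parameters:
      density: 0.5
      weight: 1.0
  - model: Blur-7b-v1.21
    parameters:
      density: 0.5
      weight: 1.0
merge_method: ties
base_model: udkai/Turdus
parameters:
  normalize: true
dtype: float16
```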

Key Capabilities & Performance

This model exhibits robust performance across several reasoning and language understanding benchmarks, as evaluated on the Open LLM Leaderboard. It achieves an average score of 74.35, with notable results including:

  • AI2 Reasoning Challenge (25-shot): 72.27
  • HellaSwag (10-shot): 88.50
  • MMLU (5-shot): 64.82
  • TruthfulQA (0-shot): 69.72
  • Winogrande (5-shot): 82.95
  • GSM8k (5-shot): 67.85
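
The reported leaderboard average can be checked directly from the six per-task scores above:

```python
# Verify that the six benchmark scores average to the reported 74.35.
scores = {
    "ARC (25-shot)": 72.27,
    "HellaSwag (10-shot)": 88.50,
    "MMLU (5-shot)": 64.82,
    "TruthfulQA (0-shot)": 69.72,
    "Winogrande (5-shot)": 82.95,
    "GSM8k (5-shot)": 67.85,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 74.35
```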

Good for

  • General-purpose text generation: Its balanced performance across various benchmarks suggests suitability for a wide array of language tasks.
  • Reasoning tasks: Strong scores on AI2 Reasoning Challenge and GSM8k indicate good logical and mathematical reasoning abilities.
  • Applications requiring a 7B parameter model: Offers a competitive option within its size class for developers seeking efficient deployment.
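
The ties merging method mentioned above can be illustrated on toy arrays. The `ties_merge` function here is a hypothetical sketch of the three TIES steps (trim by magnitude, elect a per-parameter sign, and average only agreeing components), not mergekit's actual implementation, which operates on full model checkpoints:

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Toy sketch of TIES merging: trim, elect sign, disjoint merge."""
    # Task vectors: differences between each fine-tuned model and the base.
    deltas = [ft - base for ft in finetuned]

    trimmed = []
    for d in deltas:
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # Elect sign: per-parameter sign of the summed trimmed task vectors.
    stacked = np.stack(trimmed)
    elected = np.sign(stacked.sum(axis=0))

    # Disjoint merge: average only nonzero entries that match the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + merged_delta

base = np.zeros(4)
models = [np.array([1.0, -2.0, 0.1, 0.0]),
          np.array([2.0,  1.0, 0.2, 0.0])]
print(ties_merge(base, models, density=0.5))  # [ 1.5 -2.   0.   0. ]
```

On this toy input, the small-magnitude entries are trimmed away, the first coordinate (where both models agree in sign) is averaged, and the second keeps only the component matching the elected sign.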