liminerity/Blur-7b-v1.22
Text generation · Model size: 7B · Quant: FP8 · Context length: 4K · Published: Jan 18, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

liminerity/Blur-7b-v1.22 is a 7-billion-parameter language model created by liminerity by merging s3nh/Sonya-Panda-7B-slerp, argilla/distilabeled-Marcoro14-7B-slerp, and Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp with the TIES merging method. The model delivers balanced performance across benchmarks, with an average score of 63.35 on the Open LLM Leaderboard and notably strong results on HellaSwag and TruthfulQA. It is designed for general-purpose language tasks, leveraging its merged lineage for broad applicability within its 4096-token context window.


Blur-7b-v1.22: A Merged 7B Language Model

Blur-7b-v1.22 is a 7-billion-parameter language model developed by liminerity, created by merging three distinct models: s3nh/Sonya-Panda-7B-slerp, argilla/distilabeled-Marcoro14-7B-slerp, and Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp. The merge was performed with the TIES method via LazyMergekit, using liminerity/Blur-7b-v1.21 as the base model.
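For readers who want to reproduce a merge of this kind, TIES merges are typically described with a mergekit YAML configuration and run through the mergekit-yaml CLI (LazyMergekit is a notebook wrapper around the same tooling). The sketch below is illustrative only: the density and weight values are placeholders, not the actual Blur-7b-v1.22 recipe, which is not reproduced here.

```python
# Illustrative sketch: writing a mergekit TIES config and invoking the CLI.
# The density/weight values are placeholders, not Blur-7b-v1.22's actual recipe.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: ties
    base_model: liminerity/Blur-7b-v1.21
    models:
      - model: s3nh/Sonya-Panda-7B-slerp
        parameters:
          density: 0.5   # placeholder: fraction of delta parameters kept
          weight: 0.3    # placeholder: contribution to the merged weights
      - model: argilla/distilabeled-Marcoro14-7B-slerp
        parameters:
          density: 0.5
          weight: 0.3
      - model: Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-v3-3-Slerp
        parameters:
          density: 0.5
          weight: 0.3
    parameters:
      normalize: true
    dtype: bfloat16
""")

with open("ties_merge.yml", "w") as f:
    f.write(config)

# mergekit-yaml <config> <output_dir> writes the merged model to disk.
subprocess.run(["mergekit-yaml", "ties_merge.yml", "./blur-7b-merge"], check=True)
```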

Key Capabilities & Performance

This model exhibits a well-rounded performance profile, as indicated by its evaluation on the Open LLM Leaderboard. It achieved an average score of 63.35, demonstrating proficiency across several key metrics:

  • HellaSwag (10-shot): 82.00
  • TruthfulQA (0-shot): 68.01
  • AI2 Reasoning Challenge (25-shot): 62.29
  • MMLU (5-shot): 58.03
  • Winogrande (5-shot): 78.61
  • GSM8k (5-shot): 31.16

These scores suggest a model capable of handling a variety of tasks, from common-sense reasoning and factual recall to language understanding and basic mathematical problem solving, within its 4096-token context window.
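Assuming the repository ships standard transformers-compatible weights (the usual case for Mistral-family 7B merges), a minimal generation sketch looks like the following; the prompt and sampling settings are arbitrary examples, and truncation keeps the input inside the 4096-token window.

```python
# Minimal inference sketch, assuming standard transformers-compatible weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/Blur-7b-v1.22"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit the available GPU
    device_map="auto",
)

prompt = "Explain the TIES model-merging method in two sentences."
# Truncate to stay inside the model's 4096-token context window.
inputs = tokenizer(
    prompt, return_tensors="pt", truncation=True, max_length=4096
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```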

Good For

  • General-purpose text generation and understanding: Its balanced benchmark performance makes it suitable for a wide array of language tasks.
  • Applications requiring a blend of reasoning and factual knowledge: The merge incorporates models with strengths in different areas, contributing to its versatility.
  • Developers seeking a 7B model with a unique merged architecture: Offers an alternative to single-base models, potentially providing a distinct blend of capabilities.