Gille/StrangeMerges_49-7B-dare_ties
Gille/StrangeMerges_49-7B-dare_ties is a 7-billion-parameter language model created by Gille by merging three other 7B models with the dare_ties method. It achieves an average score of 75.50 on the Open LLM Leaderboard, reflecting strong general language understanding and reasoning, and is well suited to generative tasks that demand common-sense reasoning, factual recall, and question answering.
Model Overview
Gille/StrangeMerges_49-7B-dare_ties is a 7-billion-parameter language model developed by Gille. It is the product of merging three distinct 7B models: Gille/StrangeMerges_32-7B-slerp, AurelPx/Percival_01-7b-slerp, and louisbrulenaudet/Pearl-7B-slerp. The merge was performed with the dare_ties method via LazyMergekit; DARE-TIES randomly drops and rescales each source model's weight deltas (DARE) and resolves sign conflicts between them before merging (TIES), which helps combine model strengths while limiting interference.
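The original merge configuration is not published on this page, but a dare_ties merge of these three models via mergekit would typically look like the sketch below. The base model, `density`, and `weight` values here are illustrative assumptions, not the values Gille actually used:

```yaml
# Hypothetical mergekit config for a dare_ties merge of the three source models.
# base_model and the density/weight values are assumptions for illustration.
models:
  - model: Gille/StrangeMerges_32-7B-slerp
    parameters:
      density: 0.5   # fraction of delta weights kept (DARE drop rate = 1 - density)
      weight: 0.4    # contribution of this model's deltas to the merge
  - model: AurelPx/Percival_01-7b-slerp
    parameters:
      density: 0.5
      weight: 0.3
  - model: louisbrulenaudet/Pearl-7B-slerp
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1  # assumed common ancestor of the 7B models
dtype: bfloat16
```

Running `mergekit-yaml` on such a config produces the merged checkpoint; LazyMergekit automates generating and executing a config of this shape.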
Key Capabilities & Performance
This model exhibits strong performance across a variety of benchmarks, as evaluated on the Open LLM Leaderboard. It achieves an average score of 75.50, with notable results in:
- AI2 Reasoning Challenge (25-shot): 72.35
- HellaSwag (10-shot): 88.30
- MMLU (5-shot): 64.31
- TruthfulQA (0-shot): 74.70
- Winogrande (5-shot): 83.74
- GSM8k (5-shot): 69.60
These scores suggest a well-rounded model capable of handling tasks requiring common sense reasoning, factual knowledge, and logical problem-solving.
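As a sanity check, the reported leaderboard average is simply the arithmetic mean of the six benchmark scores listed above:

```python
# Open LLM Leaderboard scores reported for Gille/StrangeMerges_49-7B-dare_ties
scores = {
    "ARC (25-shot)": 72.35,
    "HellaSwag (10-shot)": 88.30,
    "MMLU (5-shot)": 64.31,
    "TruthfulQA (0-shot)": 74.70,
    "Winogrande (5-shot)": 83.74,
    "GSM8k (5-shot)": 69.60,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 75.50
```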
Good For
- General-purpose text generation and understanding.
- Applications requiring robust performance in multiple-choice reasoning and question answering.
- Use cases where a 7B parameter model with strong benchmark performance is desired.