jaLLAbi2-7b: A Merged 7B Language Model
jaLLAbi2-7b is a 7-billion-parameter language model developed by AbacusResearch. It was created with mergekit by combining four 7B models: FelixChao/WestSeverus-7B-DPO-v2, bardsai/jaskier-7b-dpo-v5.6, AbacusResearch/haLLAwa3, and cognitivecomputations/WestLake-7B-v2-laser. The merge uses the dare_ties method, which randomly drops and rescales each model's parameter deltas (DARE) and resolves sign conflicts between them (TIES), aiming to combine the strengths of the constituent models.
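The exact merge recipe is not reproduced on this card, but a dare_ties merge of this shape can be expressed as a mergekit configuration and run through mergekit's Python API. The sketch below is illustrative only: the base model, densities, and weights are assumptions, not the published values.

```python
# Illustrative sketch of a dare_ties merge via mergekit's Python API.
# The base model, densities, and weights are assumed values, not the
# actual (unpublished) recipe used for jaLLAbi2-7b.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = """
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1  # assumed shared base for these 7B models
dtype: bfloat16
models:
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters: {density: 0.5, weight: 0.25}
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters: {density: 0.5, weight: 0.25}
  - model: AbacusResearch/haLLAwa3
    parameters: {density: 0.5, weight: 0.25}
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters: {density: 0.5, weight: 0.25}
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YML))
run_merge(
    merge_config,
    out_path="./jaLLAbi2-7b-merged",  # hypothetical output directory
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```

The same configuration, saved as a YAML file, can also be run from the command line with mergekit-yaml.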
Performance Highlights
Evaluated on the Open LLM Leaderboard, jaLLAbi2-7b demonstrates strong general performance with an average score of 75.06. Key benchmark results include:
- AI2 Reasoning Challenge (25-shot): 71.67
- HellaSwag (10-shot): 88.29
- MMLU (5-shot): 64.92
- TruthfulQA (0-shot): 70.16
- Winogrande (5-shot): 83.35
- GSM8k (5-shot): 71.95
These scores indicate proficiency across a diverse range of tasks, from commonsense reasoning and multiple-choice question answering to mathematical problem-solving.
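The Open LLM Leaderboard computes these scores with EleutherAI's lm-evaluation-harness, so individual results can be approximated locally. A minimal sketch for one benchmark follows; only the model ID and the 25-shot ARC setting come from this card, and exact numbers may differ with harness version and hardware.

```python
# Sketch: approximating the 25-shot ARC score locally with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=AbacusResearch/jaLLAbi2-7b,dtype=bfloat16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # matches the 25-shot setting listed above
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```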
Good for
- General-purpose language understanding and generation tasks.
- Applications requiring robust reasoning and common sense.
- Scenarios where a 7B-parameter model with strong benchmark performance is desired.
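For the tasks above, the model can be used like any other causal language model on the Hugging Face Hub. A minimal sketch, assuming the repository ships standard transformers weights:

```python
# Minimal sketch of loading and prompting the model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AbacusResearch/jaLLAbi2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Question: If a train travels 60 miles in 1.5 hours, what is its average speed?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```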