MSL7/INEX4-7b
MSL7/INEX4-7b is a 7 billion parameter language model developed by Liminerity, created through a series of slerp merges using MergeKit. This model integrates components from liminerity/Ingot-7b-slerp-7-forged and yam-peleg/Experiment26-7B, resulting in a model with a 4096 token context length. It demonstrates strong general reasoning capabilities, achieving an average score of 75.84 on the Open LLM Leaderboard, making it suitable for diverse natural language processing tasks.
Overview
MSL7/INEX4-7b was constructed through a multi-stage slerp merging process using MergeKit, combining components from liminerity/Ingot-7b-slerp-7-forged and yam-peleg/Experiment26-7B. The merged model retains a 4096-token context length.
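The exact merge recipe is not reproduced on this card, but a slerp merge of these two parents would typically be described to MergeKit with a YAML config along these lines. The layer ranges, interpolation weights (`t`), filters, and dtype below are illustrative assumptions, not the author's published settings:

```yaml
# Hypothetical MergeKit slerp config (settings are assumptions, not the published recipe)
slices:
  - sources:
      - model: liminerity/Ingot-7b-slerp-7-forged
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-7-forged
parameters:
  t:
    # Per-filter interpolation schedules; 0 keeps the base model, 1 the other parent
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

A config like this would be run with MergeKit's CLI (e.g. `mergekit-yaml config.yml ./output`), producing the merged checkpoint.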
Key Capabilities & Performance
This model exhibits robust performance across various benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 75.84, with notable results in:
- AI2 Reasoning Challenge (25-Shot): 72.95
- HellaSwag (10-Shot): 88.79
- MMLU (5-Shot): 64.70
- TruthfulQA (0-shot): 74.42
- Winogrande (5-shot): 83.90
- GSM8k (5-shot): 70.28
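The reported leaderboard average is just the unweighted mean of the six benchmark scores above, which is easy to verify (scores transcribed from this card):

```python
# Open LLM Leaderboard scores for MSL7/INEX4-7b, transcribed from the list above
scores = {
    "ARC (25-shot)": 72.95,
    "HellaSwag (10-shot)": 88.79,
    "MMLU (5-shot)": 64.70,
    "TruthfulQA (0-shot)": 74.42,
    "Winogrande (5-shot)": 83.90,
    "GSM8k (5-shot)": 70.28,
}

# The leaderboard average is the unweighted mean of the six benchmarks
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # → 75.84
```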
Use Cases
Given its balanced performance across reasoning, common sense, and language understanding tasks, INEX4-7b is well-suited for general-purpose applications requiring strong analytical and generative capabilities. Its 7B parameter size makes it efficient for deployment while maintaining competitive performance.