Kukedlc/NeuralMarioMonarch-7B-slerp is a 7 billion parameter language model created by Kukedlc by merging mlabonne/Monarch-7B and vanillaOVO/supermario_v4 with spherical linear interpolation (slerp). The merged model averages 76.11 on the Open LLM Leaderboard, demonstrating strong performance across reasoning and language understanding benchmarks, and is suited to general language generation tasks where a balance of capability and model size is desired.
Model Overview
Kukedlc/NeuralMarioMonarch-7B-slerp is a 7 billion parameter language model developed by Kukedlc. It was created by merging two models, mlabonne/Monarch-7B and vanillaOVO/supermario_v4, using a spherical linear interpolation (slerp) merge. Rather than averaging weights along a straight line, slerp interpolates between the two models' weight tensors along an arc on a hypersphere, preserving the geometric properties of the weights and aiming to combine the strengths of both base models in a single 7B model.
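As a rough illustration of the merge operation (a minimal NumPy sketch, not the actual mergekit implementation used to build this model), slerp between two flattened weight tensors can be written as:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values move
    along the arc between the two vectors' directions.
    """
    # Normalize copies to measure the angle between the two vectors
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

In a real merge this interpolation is applied layer by layer across both checkpoints, often with different `t` values for different parts of the network (e.g. attention vs. MLP weights).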
Key Capabilities & Performance
This model has been evaluated on the Open LLM Leaderboard, achieving a notable average score of 76.11. Its performance highlights include:
- AI2 Reasoning Challenge (25-shot): 73.81
- HellaSwag (10-shot): 89.04
- MMLU (5-shot): 64.61
- TruthfulQA (0-shot): 74.97
- Winogrande (5-shot): 85.00
- GSM8k (5-shot): 69.22
These scores indicate robust reasoning, commonsense inference, language understanding, and mathematical problem-solving ability.
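The leaderboard average quoted above is simply the arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores for Kukedlc/NeuralMarioMonarch-7B-slerp
scores = {
    "ARC (25-shot)": 73.81,
    "HellaSwag (10-shot)": 89.04,
    "MMLU (5-shot)": 64.61,
    "TruthfulQA (0-shot)": 74.97,
    "Winogrande (5-shot)": 85.00,
    "GSM8k (5-shot)": 69.22,
}

# Arithmetic mean, rounded to two decimals as reported on the leaderboard
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 76.11
```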
Good For
- General text generation and understanding tasks.
- Applications requiring a balance of performance and a 7B parameter footprint.
- Exploration of merged model architectures for diverse capabilities.