core-3/kuno-dogpark-7b
core-3/kuno-dogpark-7b is a 7 billion parameter language model created by core-3 by merging SanjiWatsuki/Kunoichi-DPO-v2-7B and mlabonne/Monarch-7B with the SLERP (spherical linear interpolation) merge method. The model shows strong general reasoning ability, with an average score of 74.82 across the Open LLM Leaderboard benchmarks, and supports a context length of 4096 tokens. It is suitable for tasks requiring robust language understanding and generation.
Model Overview
kuno-dogpark-7b is a 7 billion parameter language model developed by core-3. It was produced by merging two models, SanjiWatsuki/Kunoichi-DPO-v2-7B and mlabonne/Monarch-7B, using the SLERP merge method via LazyMergekit. This merging strategy combines the strengths of the constituent models to improve overall performance.
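The exact merge configuration for this model is not reproduced here, but a typical LazyMergekit SLERP config for two Mistral-7B-derived models looks like the sketch below. The `layer_range`, `t` interpolation schedule, and choice of base model are illustrative assumptions, not the published settings:

```yaml
# Hypothetical mergekit SLERP config (values are assumptions)
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: mlabonne/Monarch-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn   # per-layer interpolation weights for attention
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp         # per-layer interpolation weights for MLP blocks
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5          # default weight for all other tensors
dtype: bfloat16
```

The `t` parameter controls how far along the interpolation arc each tensor lies: 0 keeps the first model's weights, 1 the second's.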
Performance Highlights
The model's capabilities have been evaluated on the Open LLM Leaderboard, where it achieved an average score of 74.82. Key benchmark results include:
- AI2 Reasoning Challenge (25-shot): 71.84
- HellaSwag (10-shot): 88.15
- MMLU (5-shot): 65.07
- TruthfulQA (0-shot): 71.14
- Winogrande (5-shot): 82.24
- GSM8k (5-shot): 70.51
These scores indicate strong performance across reasoning, commonsense, and language understanding tasks, making the model well-suited for general-purpose language generation and understanding applications.
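Since the model is a SLERP merge, it may help to see what that interpolation actually does. The sketch below is a minimal pure-Python illustration of spherical linear interpolation between two weight vectors; it is not the mergekit implementation, just the underlying formula with the standard fallback to linear interpolation for nearly parallel vectors:

```python
import math

def slerp(v0, v1, t, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    Interpolates along the great-circle arc between v0 and v1 rather
    than along the straight line, which preserves vector norm better
    when merging model weights. Falls back to plain linear
    interpolation when the vectors are nearly parallel.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp to [-1, 1] to guard against floating-point drift
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if abs(math.sin(omega)) < eps:  # nearly parallel: lerp instead
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Midpoint between two orthogonal unit vectors stays on the unit circle
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))  # → [0.7071..., 0.7071...]
```

Note that the midpoint has norm 1, whereas a plain average of the same two vectors would have norm ≈ 0.707; this norm preservation is the usual motivation for preferring SLERP over linear averaging when merging weights.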