Model Overview
Azazelle/Sina-Odin-7b-Merge is an experimental 7-billion-parameter language model developed by Azazelle. It was constructed with the DARE_TIES merge method, which combines DARE's random dropping and rescaling of fine-tuned parameter deltas with TIES-style sign-consensus merging, applied to several base models: Mihaiii/Metis-0.3, rishiraj/smol-7b, SanjiWatsuki/openchat-3.5-1210-starling-slerp, and Azazelle/Dumb-Maidlet. The merge aims to combine the strengths of its constituent models into a single versatile model.
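For context, here is a minimal sketch of the DARE_TIES idea; this is not Azazelle's actual merge configuration (which is not reproduced here), and the tensor shapes and drop rate below are illustrative assumptions. Each fine-tuned model's delta from the shared base is randomly dropped with probability p and the survivors rescaled by 1/(1 - p) (DARE), then the pruned deltas are merged with a simple TIES-style sign election:

```python
import torch

def dare_prune(delta: torch.Tensor, drop_p: float = 0.5) -> torch.Tensor:
    """DARE: randomly drop delta entries with probability drop_p,
    then rescale survivors by 1/(1 - drop_p) to preserve the expected value."""
    mask = torch.rand_like(delta) >= drop_p
    return delta * mask / (1.0 - drop_p)

def ties_merge(deltas: list[torch.Tensor]) -> torch.Tensor:
    """TIES-style merge: elect a per-entry sign from the summed deltas,
    keep only entries that agree with it, then average the survivors."""
    stacked = torch.stack(deltas)                   # (n_models, ...)
    elected_sign = torch.sign(stacked.sum(dim=0))   # majority sign per entry
    agrees = torch.sign(stacked) == elected_sign    # entries matching the sign
    summed = (stacked * agrees).sum(dim=0)
    counts = agrees.sum(dim=0).clamp(min=1)         # avoid division by zero
    return summed / counts

# Illustrative usage on a single weight tensor (base + fine-tuned variants).
base = torch.randn(4, 4)
fine_tuned = [base + 0.1 * torch.randn(4, 4) for _ in range(4)]
deltas = [dare_prune(ft - base, drop_p=0.5) for ft in fine_tuned]
merged = base + ties_merge(deltas)
```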
Key Capabilities & Performance
The model's general language understanding and reasoning were evaluated on the Open LLM Leaderboard. Its scores on the leaderboard's six benchmarks are listed below; the reported average is the plain arithmetic mean of the six task scores (see the sketch after the list):
- Avg. Score: 47.82
- AI2 Reasoning Challenge (25-shot): 52.82
- HellaSwag (10-shot): 68.86
- MMLU (5-shot): 45.54
- TruthfulQA (0-shot): 39.20
- Winogrande (5-shot): 72.22
- GSM8k (5-shot): 8.26
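The average can be confirmed directly from the six task scores:

```python
scores = {
    "ARC (25-shot)": 52.82,
    "HellaSwag (10-shot)": 68.86,
    "MMLU (5-shot)": 45.54,
    "TruthfulQA (0-shot)": 39.20,
    "Winogrande (5-shot)": 72.22,
    "GSM8k (5-shot)": 8.26,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 47.82, matching the reported Avg. Score
```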
When to Use This Model
Sina-Odin-7b-Merge suits use cases that need a compact 7B-parameter model with reasonable performance on general language tasks; note that its common-sense scores (HellaSwag, Winogrande) are considerably stronger than its math reasoning (GSM8k: 8.26). Given its experimental nature, it is a good candidate for research into model-merging techniques, or for applications where a merged 7B model is preferred over a single, larger base model. Developers can explore it for text generation, question answering, and common-sense reasoning, keeping the benchmark scores above in mind when setting performance expectations; a minimal loading sketch follows.
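A minimal sketch for loading the model with Hugging Face transformers, assuming it exposes the standard AutoModelForCausalLM interface; the prompt and generation settings are illustrative, not tuned recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/Sina-Odin-7b-Merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",          # requires the accelerate package
)

prompt = "Briefly explain what model merging is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```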