Overview
Azazelle/SlimMelodicMaid is a 7-billion-parameter language model developed by Azazelle. It is the product of a slerp (spherical linear interpolation) merge combining several base models: Silicon-Maid-7B, piano-medley-7b, xDAN-L1-Chat-RL-v1, and mistral-7b-slimorcaboros. This merging technique integrates the distinct strengths of the constituent models, aiming for a well-rounded performance profile.
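The core operation behind a slerp merge is spherical linear interpolation between corresponding weight tensors of two models. A minimal sketch in NumPy (the flattened toy vectors and the helper name are illustrative, not the actual merge code):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t values follow the great
    circle between the two directions rather than a straight line.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two weight directions
    if theta < eps:                   # nearly parallel: plain lerp is stable
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# Toy example: interpolate two orthogonal "weight" vectors halfway.
w = slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

In an actual merge this interpolation is applied tensor by tensor across the source checkpoints, with t chosen per layer group and tensor type.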
Key Capabilities & Performance
The model performs solidly across a range of benchmarks on the Open LLM Leaderboard, with an average score of 69.70. Specific benchmark results:
- AI2 Reasoning Challenge (25-shot): 67.15
- HellaSwag (10-shot): 86.01
- MMLU (5-shot): 64.75
- TruthfulQA (0-shot): 60.88
- Winogrande (5-shot): 78.61
- GSM8k (5-shot): 60.80
These scores indicate proficiency in common-sense reasoning, factual knowledge, and mathematical problem-solving, making the model suitable for tasks requiring broad language understanding.
Merge Details
The slerp merge was defined in a .yaml configuration specifying layer ranges and per-filter t values for different tensor types (self_attn, mlp), precisely controlling each source model's contribution. The base model for the merge was mistralai/Mistral-7B-v0.1.
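The card does not reproduce the configuration file itself, but a mergekit slerp config of this kind typically has the following shape (the source model chosen, layer ranges, and t schedules below are illustrative placeholders, not the values actually used):

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Silicon-Maid-7B   # illustrative source model
        layer_range: [0, 32]
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]   # illustrative per-layer-group t for attention tensors
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]   # illustrative t schedule for MLP tensors
    - value: 0.5                     # default t for all remaining tensors
dtype: bfloat16
```

The filter entries are what let a merge weight attention and MLP tensors differently, as described above.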
Good for
- General-purpose text generation and understanding
- Applications requiring balanced performance in reasoning and common sense
- Exploration of merged model architectures