Weyaxi/Samantha-Nebula-7B
Samantha-Nebula-7B is a 7 billion parameter language model by Weyaxi, created by merging ehartford/samantha-mistral-7b and PulsarAI/Nebula-7B. It achieves an average score of 52.87 on the Open LLM Leaderboard, with notably strong results on HellaSwag and Winogrande, making it suitable for general-purpose language tasks.
Samantha-Nebula-7B Overview
Samantha-Nebula-7B, developed by Weyaxi, is a merge of two distinct 7 billion parameter models: ehartford/samantha-mistral-7b and PulsarAI/Nebula-7B. The merge aims to combine the strengths of its constituent models into a single, versatile language model.
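The card does not state which merge method was used. A common approach for models sharing an architecture is linear weight interpolation, sketched below with plain floats standing in for tensors (the `merge_state_dicts` helper and the `alpha` weighting are illustrative assumptions, not the actual recipe):

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two flat weight dicts with matching keys.

    alpha=1.0 returns sd_a unchanged; alpha=0.0 returns sd_b.
    Real merges apply the same formula per tensor element.
    """
    return {name: alpha * sd_a[name] + (1.0 - alpha) * sd_b[name]
            for name in sd_a}

# Toy weights standing in for the two source checkpoints.
samantha = {"layer0.weight": 1.0, "layer0.bias": 3.0}
nebula = {"layer0.weight": 0.0, "layer0.bias": 1.0}

merged = merge_state_dicts(samantha, nebula, alpha=0.5)
print(merged)  # {'layer0.weight': 0.5, 'layer0.bias': 2.0}
```

Tools such as mergekit apply this kind of per-parameter arithmetic (and variants like SLERP) across full checkpoints; the two source models must share the same architecture and tensor shapes for any element-wise merge to be valid.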
Key Capabilities & Performance
The model's performance has been evaluated on the Open LLM Leaderboard, showcasing a balanced aptitude across several benchmarks:
- Average Score: 52.87
- HellaSwag (10-shot): 82.25
- Winogrande (5-shot): 73.09
- ARC (25-shot): 57.00
- MMLU (5-shot): 54.21
- TruthfulQA (0-shot): 49.58
- DROP (3-shot): 42.57
- GSM8K (5-shot): 11.37
These scores indicate proficiency in common sense reasoning (HellaSwag, Winogrande), reading comprehension (DROP), and general knowledge (MMLU), while the low GSM8K score points to weak multi-step mathematical reasoning.
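As a sanity check, the listed leaderboard average matches the unweighted mean of the seven benchmark scores above:

```python
# Benchmark scores as listed on the model card.
scores = {
    "HellaSwag": 82.25,
    "Winogrande": 73.09,
    "ARC": 57.00,
    "MMLU": 54.21,
    "TruthfulQA": 49.58,
    "DROP": 42.57,
    "GSM8K": 11.37,
}

# Unweighted mean across all seven benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 52.87
```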
Good For
- General text generation and understanding tasks.
- Applications requiring strong common sense and reading comprehension.
- Use cases where a 7B parameter model with balanced performance is desired.