Weyaxi/EnsembleV5-Nova-13B
EnsembleV5-Nova-13B is a 13-billion-parameter language model created by Weyaxi, formed by merging yontaek/llama-2-13B-ensemble-v5 and PulsarAI/Nova-13B-Lora. It achieves an average score of 49.65 on the Open LLM Leaderboard, with notable results in ARC (62.71) and HellaSwag (82.55). It is suitable for general language understanding and generation tasks, particularly those requiring robust reasoning and common sense.
EnsembleV5-Nova-13B Overview
EnsembleV5-Nova-13B is a 13 billion parameter language model developed by Weyaxi, resulting from a merge of two distinct models: yontaek/llama-2-13B-ensemble-v5 and PulsarAI/Nova-13B-Lora. This model aims to combine the strengths of its constituent parts to offer enhanced performance across various benchmarks.
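As a minimal usage sketch, assuming the model is published on the Hugging Face Hub under the id `Weyaxi/EnsembleV5-Nova-13B` (taken from the card title) and loads with the standard `transformers` causal-LM API, inference might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub id, taken from the card title.
model_id = "Weyaxi/EnsembleV5-Nova-13B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 13B model in fp16 needs roughly 26 GB of accelerator memory
    device_map="auto",          # spread layers across available devices
)

prompt = "Explain why the sky is blue in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is a sketch, not a tested recipe: the dtype, device placement, and prompt format (the Llama-2 base may expect a specific chat template) should be checked against the model's own files.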
Key Performance Metrics
The model's capabilities have been evaluated on the Open LLM Leaderboard, achieving an overall average score of 49.65. Specific benchmark results include:
- ARC (25-shot): 62.71
- HellaSwag (10-shot): 82.55
- MMLU (5-shot): 56.79
- TruthfulQA (0-shot): 49.86
- Winogrande (5-shot): 76.24
- GSM8K (5-shot): 10.77
- DROP (3-shot): 8.64
These scores show solid reasoning, common sense, and language understanding, with particular strength in ARC and HellaSwag, but markedly weaker results on GSM8K and DROP, suggesting limited arithmetic and multi-step numerical reasoning. The model's context length is 4096 tokens.
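The reported leaderboard average can be checked directly from the per-benchmark scores above, assuming (as was the convention for this leaderboard snapshot) that it is the plain mean of the seven benchmarks:

```python
# Per-benchmark scores as listed on the card.
scores = {
    "ARC": 62.71,
    "HellaSwag": 82.55,
    "MMLU": 56.79,
    "TruthfulQA": 49.86,
    "Winogrande": 76.24,
    "GSM8K": 10.77,
    "DROP": 8.64,
}

# Unweighted mean across all seven benchmarks.
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # → 49.65, matching the reported average
```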
Potential Use Cases
Given its benchmark performance, EnsembleV5-Nova-13B is well-suited for applications requiring:
- General text generation and comprehension: For tasks like summarization, question answering, and content creation.
- Reasoning tasks: Demonstrated by its ARC score, making it useful for logical inference and problem-solving.
- Common sense reasoning: Indicated by its HellaSwag and Winogrande scores, beneficial for human-like understanding.