theNovaAI/Supernova-experimental
Supernova-experimental is a 13 billion parameter language model developed by theNovaAI, featuring a 4096-token context length. This experimental model is a merge of PygmalionAI/pygmalion-2-13b and Undi95/Amethyst-13B, designed for conversational tasks and role-playing scenarios. It uses the Alpaca prompt template and achieves an average score of 59.79 on the Open LLM Leaderboard, with notably strong results on the HellaSwag and Winogrande benchmarks.
Supernova-experimental Overview
Supernova-experimental is a 13 billion parameter language model developed by theNovaAI as an experimental project during NovaAI's development. The model is a merge of two distinct base models, PygmalionAI/pygmalion-2-13b and Undi95/Amethyst-13B, combining their strengths to offer a unique conversational experience.
Key Capabilities
- Chatting: Optimized for general conversational interactions.
- Role-playing (RP): Handles role-play scenarios with proficiency.
- Alpaca Prompt Template: Utilizes the familiar Alpaca instruction format for consistent interaction.
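Since the model expects Alpaca-formatted input, a prompt can be assembled as in the sketch below. The helper name and the exact header wording follow the commonly used Alpaca instruction format and are illustrative; the model's training data may use slightly different phrasing.

```python
# Sketch: build an Alpaca-style prompt for an instruction-tuned model.
# `build_alpaca_prompt` is a hypothetical helper, not part of any library.
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    prompt = header + f"### Instruction:\n{instruction}\n\n"
    if user_input:
        # The optional Input section carries context for the instruction.
        prompt += f"### Input:\n{user_input}\n\n"
    # The model continues generation from the Response marker.
    prompt += "### Response:\n"
    return prompt

print(build_alpaca_prompt("Introduce yourself as a friendly space explorer."))
```

The completed prompt is then passed to the model as plain text; the model's reply is whatever it generates after the `### Response:` marker.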
Performance Benchmarks
Evaluated on the Open LLM Leaderboard, Supernova-experimental achieved an average score of 59.79. Key individual metric scores include:
- HellaSwag (10-shot): 83.66
- Winogrande (5-shot): 77.35
- AI2 Reasoning Challenge (25-shot): 63.05
- MMLU (5-shot): 56.59
- TruthfulQA (0-shot): 49.37
- GSM8k (5-shot): 28.73
Good For
- Developers exploring experimental models for conversational AI.
- Applications requiring a model capable of engaging in chat and role-playing.
- Use cases where a 13B parameter model with a 4096-token context is suitable for interactive text generation.
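For interactive chat within the 4096-token context, the conversation history has to be trimmed once it outgrows the window. The sketch below is a hypothetical helper, not part of the model's tooling, and it uses a rough 4-characters-per-token estimate; real applications would count tokens with the model's actual tokenizer.

```python
# Sketch: keep the most recent chat turns within a fixed context budget.
# Constants and the 4-chars-per-token heuristic are illustrative assumptions.
CONTEXT_TOKENS = 4096      # model's context window
RESERVED_FOR_REPLY = 512   # leave room for the model's response

def estimate_tokens(text: str) -> int:
    # Crude approximation; a real tokenizer gives exact counts.
    return max(1, len(text) // 4)

def trim_history(messages: list[str]) -> list[str]:
    budget = CONTEXT_TOKENS - RESERVED_FOR_REPLY
    kept, used = [], 0
    # Walk backwards so the most recent turns survive the cut.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "x" * 400 for i in range(50)]
trimmed = trim_history(history)
```

Dropping the oldest turns first is the simplest policy; summarizing older turns instead would preserve more long-range context at the cost of an extra generation call.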