ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b
ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b is a 7 billion parameter language model created by ChaoticNeutrals through a spherical linear interpolation (SLERP) merge of Nitral-AI's Prima-LelantaclesV6.69-7b and Prima-LelantaclesV6.31-7b models. This experimental model achieves an average score of 73.03 on the Open LLM Leaderboard, with balanced results across reasoning, common-sense, and language understanding benchmarks, making it suited to general-purpose applications that need a well-rounded 7B model.
Model Overview
ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b is a 7 billion parameter language model developed by ChaoticNeutrals. This model was created through a SLERP merge of two base models: Nitral-AI/Prima-LelantaclesV6.69-7b and Nitral-AI/Prima-LelantaclesV6.31-7b.
Key Capabilities & Performance
The model was evaluated on the Open LLM Leaderboard, where it achieved an average score of 73.03 across six benchmarks:
- AI2 Reasoning Challenge (25-shot): 70.65
- HellaSwag (10-shot): 87.94
- MMLU (5-shot): 64.67
- TruthfulQA (0-shot): 67.45
- Winogrande (5-shot): 84.69
- GSM8k (5-shot): 62.77
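For reference, the 73.03 headline number is simply the arithmetic mean of the six scores above, which a short check confirms:

```python
# Mean of the six Open LLM Leaderboard scores listed above.
scores = [70.65, 87.94, 64.67, 67.45, 84.69, 62.77]
average = sum(scores) / len(scores)
print(round(average, 2))  # 73.03
```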
These scores indicate balanced performance across reasoning, common-sense, and general-knowledge tasks. The merge configuration used distinct t (interpolation) values for the self-attention and MLP layers, indicating a deliberate, layer-wise blend of the two source models' characteristics; a sketch of how such a per-layer SLERP merge works is shown below.
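The following is a minimal sketch, not the actual merge script, of what a SLERP merge with per-layer t values looks like. The function names, the example t values, and the layer-matching rules are illustrative assumptions; the real merge was produced with a merge toolkit rather than this code.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats the flattened tensors as high-dimensional vectors and
    interpolates along the arc between them; falls back to linear
    interpolation when the vectors are nearly parallel.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(dot)          # angle between the two weight vectors
    if omega.abs() < 1e-4:           # nearly parallel: SLERP degenerates to LERP
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    res = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return res.reshape(a.shape).to(a.dtype)

# Hypothetical per-layer interpolation factors: parameters whose names
# match "self_attn" or "mlp" get their own t; everything else uses a default.
T_BY_LAYER = {"self_attn": 0.5, "mlp": 0.5}
T_DEFAULT = 0.5

def merge_state_dicts(sd_a: dict, sd_b: dict) -> dict:
    """Merge two compatible state dicts parameter by parameter."""
    merged = {}
    for name, w_a in sd_a.items():
        t = next((v for key, v in T_BY_LAYER.items() if key in name), T_DEFAULT)
        merged[name] = slerp(t, w_a, sd_b[name])
    return merged
```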
Use Cases
This model is suitable for general language generation and understanding tasks where a 7B-parameter model with solid all-around performance is required (a minimal loading sketch follows the list below). Its balanced benchmark scores suggest applicability in areas such as:
- Text generation
- Question answering
- Reasoning tasks
- Common sense understanding
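As a sketch of basic usage, assuming the weights are hosted on the Hugging Face Hub under the repository id above and that the transformers library is installed, the model can be loaded like any other causal LM; the prompt and generation settings here are arbitrary examples, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```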