ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16
Prima-LelantaclesV4-7b-16k-bf16 is a 7-billion-parameter language model from ChaoticNeutrals, created via a slerp merge of Test157t/Yarncules-7b-128k and Test157t/Prima-LelantaclesV3-7b. Released in bfloat16, it targets general language tasks and scores an average of 68.28 on the Open LLM Leaderboard, with solid results across reasoning, common-sense, and factual-recall benchmarks, making it suitable for a range of conversational and analytical applications.
Model Overview
ChaoticNeutrals/Prima-LelantaclesV4-7b-16k-bf16 is a 7-billion-parameter language model developed by ChaoticNeutrals. It was produced by a spherical linear interpolation (slerp) merge of two base models, Test157t/Yarncules-7b-128k and Test157t/Prima-LelantaclesV3-7b, using a configuration that blends layers from both sources with varying interpolation weights for the self-attention and MLP components. The model is distributed in bfloat16 format.
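To make the merge method concrete, here is a minimal sketch of slerp itself on a flat list of parameters. The actual merge was presumably done with per-layer, per-component weights via a merge tool; the function name and the fallback to linear interpolation for near-parallel vectors are illustrative choices, not details from the model card.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two parameter vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values interpolate
    along the arc between the two vectors rather than the straight line.
    """
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    # Cosine of the angle between the two vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    omega = math.acos(dot)
    if abs(math.sin(omega)) < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

In a real merge this would be applied tensor by tensor, with t varying per layer and per component (self-attention vs. MLP), as the configuration above describes.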
Performance Highlights
Evaluated on the Open LLM Leaderboard, Prima-LelantaclesV4-7b-16k-bf16 achieved an average score of 68.28. Key benchmark results include:
- AI2 Reasoning Challenge (25-shot): 66.04
- HellaSwag (10-shot): 85.07
- MMLU (5-shot): 64.70
- TruthfulQA (0-shot): 54.76
- Winogrande (5-shot): 80.27
- GSM8k (5-shot): 58.83
These scores indicate a balanced performance across various reasoning, common sense, and knowledge-based tasks.
Use Cases
This model is suitable for applications requiring a general-purpose language model with solid performance in:
- Reasoning and Problem Solving: Demonstrated by its scores on AI2 Reasoning Challenge and GSM8k.
- Common Sense Understanding: Indicated by strong performance in HellaSwag and Winogrande.
- General Knowledge and Factual Recall: Supported by its MMLU score.
Its bfloat16 precision halves memory use relative to float32 (roughly 14 GB of weights for 7 billion parameters) while keeping float32's exponent range, making it efficient to deploy in environments that support this data type.