DreadPoor/Satyr-7B-Model_Stock
DreadPoor/Satyr-7B-Model_Stock is a 7 billion parameter language model created by DreadPoor by merging four existing 7B models with the Model Stock method. It is designed for general language tasks, leveraging the combined strengths of its constituent models, and achieves an average score of 71.74 on the Open LLM Leaderboard, with solid results across benchmarks covering reasoning, common sense, and factual recall.
Satyr-7B-Model_Stock Overview
Satyr-7B-Model_Stock is a 7 billion parameter language model developed by DreadPoor. It was created using the Model Stock merging method, combining four distinct 7B models: NeverSleep/Noromaid-7B-0.4-DPO, SanjiWatsuki/Kunoichi-DPO-v2-7B, Undi95/Toppy-M-7B, and Epiculous/Fett-uccine-7B. This approach aims to synthesize the capabilities of multiple specialized models into a single, more versatile offering.
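Model Stock interpolates between a pretrained base and the average of several fine-tuned checkpoints, with a ratio derived from the angle between the fine-tuned models' weight deltas. The model card does not state which base checkpoint was used here, so the sketch below is a minimal, hypothetical per-tensor illustration of the interpolation described in the Model Stock paper, not DreadPoor's actual merge recipe (merges like this are typically produced with mergekit rather than hand-rolled code):

```python
import torch

def model_stock_merge(w0: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Hypothetical per-tensor Model Stock interpolation.

    w0: the pretrained (base) weight tensor.
    finetuned: k fine-tuned versions of the same tensor.
    """
    k = len(finetuned)
    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [w.flatten() - w0.flatten() for w in finetuned]
    # Mean pairwise cosine similarity stands in for cos(theta) in the paper.
    cos_vals = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    # Move from the base weights toward the average of the fine-tuned weights.
    return (1 - t) * w0 + t * w_avg
```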
Key Capabilities & Performance
The model demonstrates competitive performance across a range of benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 71.74, with notable results in:
- AI2 Reasoning Challenge (25-shot): 68.60
- HellaSwag (10-shot): 86.96
- MMLU (5-shot): 65.02
- TruthfulQA (0-shot): 63.76
- Winogrande (5-shot): 80.43
- GSM8k (5-shot): 65.66
These scores indicate proficiency in reasoning, common sense understanding, multi-task language understanding, and mathematical problem-solving.
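For reference, the leaderboard average is simply the arithmetic mean of the six benchmark scores, which a quick check confirms:

```python
# Mean of the six Open LLM Leaderboard benchmark scores listed above.
scores = {
    "ARC (25-shot)": 68.60,
    "HellaSwag (10-shot)": 86.96,
    "MMLU (5-shot)": 65.02,
    "TruthfulQA (0-shot)": 63.76,
    "Winogrande (5-shot)": 80.43,
    "GSM8k (5-shot)": 65.66,
}
print(round(sum(scores.values()) / len(scores), 2))  # 71.74
```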
Intended Use Cases
Given its balanced performance across various benchmarks, Satyr-7B-Model_Stock is suitable for general-purpose language generation and understanding tasks. Its merged lineage suggests broad applicability, making it a solid choice for applications requiring robust performance without a highly specialized focus.
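Since the model is published on the Hugging Face Hub as DreadPoor/Satyr-7B-Model_Stock, it can be loaded with the standard transformers API. The sketch below is a minimal example; the dtype and device settings are assumptions, not values from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DreadPoor/Satyr-7B-Model_Stock"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; both settings here
# are illustrative defaults rather than card-specified values.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain the Model Stock merging method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```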