macadeliccc/Monarch-7B-SFT
Monarch-7B-SFT by macadeliccc is a 7-billion-parameter, instruction-tuned language model. It averages 68.98 across the Open LLM Leaderboard benchmarks (AI2 Reasoning Challenge, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8k) and is suited to general-purpose language understanding and generation, particularly where strong reasoning and common-sense capabilities matter.
Monarch-7B-SFT: An Instruction-Tuned Language Model
Monarch-7B-SFT is an instruction-tuned language model developed by macadeliccc, with approximately 7 billion parameters. It is designed for general-purpose use, leveraging its instruction-following training to handle a wide range of natural language processing tasks.
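The model is hosted on Hugging Face under the ID macadeliccc/Monarch-7B-SFT. Below is a minimal loading-and-inference sketch using the transformers library; it assumes the repo ships standard AutoModelForCausalLM-compatible weights, and the plain-text prompt is illustrative rather than a documented template for this model.

```python
# Minimal sketch: load Monarch-7B-SFT and run a single greedy generation.
# Assumes standard transformers-compatible weights in the Hugging Face repo;
# the prompt format is an illustrative guess, not a documented template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/Monarch-7B-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```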
Key Capabilities
- Reasoning: Achieves 63.74 on the AI2 Reasoning Challenge (25-shot).
- Common Sense: Scores 83.58 on HellaSwag (10-shot) and 79.79 on Winogrande (5-shot).
- General Knowledge: Demonstrates 64.11 on MMLU (5-shot).
- Factuality: Records 54.25 on TruthfulQA (0-shot).
- Mathematical Reasoning: Performs at 68.39 on GSM8k (5-shot).
Performance Overview
The model's average score across the Open LLM Leaderboard benchmarks is 68.98. This aggregate reflects balanced capability across domains, making it a versatile option for applications that need robust language understanding and generation.
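As a quick check, the quoted average is simply the unweighted mean of the six benchmark scores listed above:

```python
# Sanity check: the leaderboard average is the plain mean of the six
# benchmark scores listed under Key Capabilities.
scores = {
    "ARC (25-shot)": 63.74,
    "HellaSwag (10-shot)": 83.58,
    "MMLU (5-shot)": 64.11,
    "TruthfulQA (0-shot)": 54.25,
    "Winogrande (5-shot)": 79.79,
    "GSM8k (5-shot)": 68.39,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 68.98
```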
Good for
- Applications requiring strong general reasoning and common-sense understanding.
- Tasks benefiting from instruction-following capabilities.
- Use cases where a 7B-parameter model offers a good balance of performance and computational efficiency (see the memory sketch below).
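On the efficiency point, here is a rough back-of-envelope estimate of the weight footprint, assuming fp16/bf16 storage at 2 bytes per parameter; activations and the KV cache add runtime memory on top of this.

```python
# Back-of-envelope weight-memory estimate for a 7B-parameter model.
# Assumes fp16/bf16 weights (2 bytes per parameter); activations and
# the KV cache require additional memory at inference time.
params = 7e9
bytes_per_param = 2
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~14 GB
```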