akjindal53244/Mistral-7B-v0.1-Open-Platypus
akjindal53244/Mistral-7B-v0.1-Open-Platypus is a 7 billion parameter language model based on the Mistral-7B-v0.1 architecture, instruction-finetuned using the Open-Platypus dataset. This model is optimized for general instruction following and demonstrates competitive performance across various benchmarks, including MMLU and HellaSwag. It is suitable for tasks requiring robust language understanding and generation capabilities.
Model Overview
akjindal53244/Mistral-7B-v0.1-Open-Platypus is a 7 billion parameter language model built upon the Mistral-7B-v0.1 architecture. It has been instruction-finetuned on the Open-Platypus dataset, which strengthens its instruction-following ability across a wide range of natural language tasks.
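The following is a minimal inference sketch using the Hugging Face transformers library. The Alpaca-style prompt template is an assumption (Open-Platypus finetunes are commonly prompted this way, but the exact format should be verified against the upstream model card), and the sampling parameters are illustrative defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akjindal53244/Mistral-7B-v0.1-Open-Platypus"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision: ~14 GB for a 7B model
    device_map="auto",          # spread layers across available devices
)

# Assumed Alpaca-style template; confirm against the upstream model card.
prompt = (
    "### Instruction:\n"
    "Summarize the benefits of instruction finetuning in two sentences.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative generation budget
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, not the echoed prompt.
reply = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply, skip_special_tokens=True))
```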
Key Capabilities & Performance
This model demonstrates strong performance across several benchmarks, as evaluated on the Hugging Face Open LLM Leaderboard. Its average score is 53.64, with notable results including:
- ARC (25-shot): 62.37
- HellaSwag (10-shot): 85.08
- MMLU (5-shot): 63.79
- Winogrande (5-shot): 77.66
While it excels at general reasoning and common-sense tasks, its lower scores on mathematical reasoning (GSM8K: 17.29) and reading comprehension (DROP: 21.93) make it a weaker fit for those workloads. The model supports a context length of 8192 tokens.
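Because prompt and completion together must fit in the 8192-token window, long inputs need an explicit token budget. The helper below is a sketch under that assumption; the 512-token reply budget is an illustrative value, not a model default.

```python
from transformers import AutoTokenizer

MAX_CONTEXT = 8192  # context window stated above
GEN_BUDGET = 512    # illustrative reservation for the model's reply

tokenizer = AutoTokenizer.from_pretrained(
    "akjindal53244/Mistral-7B-v0.1-Open-Platypus"
)

def fit_prompt(text: str) -> str:
    """Truncate `text` so prompt plus reply stay within the window."""
    ids = tokenizer(
        text,
        truncation=True,
        max_length=MAX_CONTEXT - GEN_BUDGET,
    )["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)
```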
Good For
- General instruction-following tasks
- Applications requiring strong language understanding and generation
- Scenarios where a 7B parameter model with competitive benchmark performance is desired (see the quantized-loading sketch after this list)
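Where a half-precision 7B model (roughly 14 GB of weights) does not fit on the available hardware, 4-bit quantization through the bitsandbytes integration in transformers is one common option. The configuration below is a hypothetical sketch for illustration, not a setup taken from this model card or from Featherless user configs.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical 4-bit setup; adjust to your hardware and accuracy needs.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "akjindal53244/Mistral-7B-v0.1-Open-Platypus",
    quantization_config=bnb_config,
    device_map="auto",  # requires a CUDA device for bitsandbytes
)
```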