monology/openinstruct-mistral-7b
monology/openinstruct-mistral-7b is a 7-billion-parameter instruction-tuned language model developed by monology, based on Mistral-7B-v0.1. Fine-tuned on the VMware/open-instruct dataset, it held the top ranking among commercially-usable 7B models on the Open LLM Leaderboard as of November 2023. The model performs well on general reasoning and language understanding tasks, making it suitable for a wide range of instruction-following applications.
OpenInstruct Mistral-7B Overview
This model, developed by monology, is an instruction-tuned variant of the mistralai/Mistral-7B-v0.1 base model. It has been fine-tuned using the VMware/open-instruct dataset, making it highly capable of following instructions across various tasks.
Key Capabilities & Performance
As of November 21, 2023, monology/openinstruct-mistral-7b was ranked 1st among commercially-usable 7B models on the Open LLM Leaderboard, where "commercially usable" means the model is built on an open-source base model and fine-tuned on a non-synthetic, open-source dataset. This distinction highlights its strong performance relative to other models of similar size that can be deployed in commercial applications.
Key evaluation results from the Open LLM Leaderboard include:
- Average Score: 63.64
- AI2 Reasoning Challenge (25-Shot): 59.73
- HellaSwag (10-Shot): 82.77
- MMLU (5-Shot): 60.55
- TruthfulQA (0-shot): 48.76
- Winogrande (5-shot): 79.56
- GSM8k (5-shot): 50.49
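The average score above can be reproduced from the six individual benchmarks, assuming the leaderboard's simple unweighted mean:

```python
# Open LLM Leaderboard scores for monology/openinstruct-mistral-7b
scores = {
    "ARC (25-shot)": 59.73,
    "HellaSwag (10-shot)": 82.77,
    "MMLU (5-shot)": 60.55,
    "TruthfulQA (0-shot)": 48.76,
    "Winogrande (5-shot)": 79.56,
    "GSM8k (5-shot)": 50.49,
}

# The leaderboard average is the unweighted mean of the six benchmarks.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 63.64
```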
Usage and Licensing
The model expects instructions in the Alpaca prompt format. Recommended inference parameters are a temperature of 0.2, top_k of 50, top_p of 0.95, and a repetition_penalty of 1.1. It is released under the permissive Apache-2.0 license, allowing broad commercial use.
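A minimal sketch of assembling a request with the Alpaca prompt format and the recommended parameters; the exact template wording below is an assumption based on the standard Alpaca template, so check the model card for the canonical string:

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the standard Alpaca template
    (assumed wording; verify against the model card)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

# Recommended inference parameters from the model card, shaped as
# keyword arguments for a typical transformers generate() call.
generation_kwargs = {
    "temperature": 0.2,
    "top_k": 50,
    "top_p": 0.95,
    "repetition_penalty": 1.1,
    "do_sample": True,  # sampling must be enabled for temperature/top_p to apply
}

prompt = build_alpaca_prompt("Summarize the Apache-2.0 license in one sentence.")
```

The prompt string and `generation_kwargs` can then be passed to any inference backend that accepts Hugging Face-style sampling parameters.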