Everyone-LLM-7b-Base Overview
Everyone-LLM-7b-Base is a 7-billion-parameter language model developed by rombodawg, designed to be a versatile and knowledgeable LLM for the community. It was created by merging 14 distinct community-developed Mistral-7B-based models using a task arithmetic merge method, integrating the strengths and varied abilities of:
- dolphin-2.6-mistral-7b-dpo
- bagel-dpo-7b-v0.4
- Hercules-2.0-Mistral-7B
- Mistral-7B-OpenOrca
- OpenHermes-2.5-Mistral-7B
- Nous-Capybara-7B-V1.9
- neural-chat-7b-v3-3
- Mistral-7B-Instruct-v0.2
- WestLake-7B-v2
- sqlcoder-7b
- MetaMath-Mistral-7B
- apollo-v1-7b
- WizardMath-7B-V1.1
- openchat-3.5-0106
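Task arithmetic merging adds weighted "task vectors" (fine-tuned weights minus base weights) back onto a shared base model. The sketch below illustrates the idea with plain scalars standing in for parameter tensors; the function name and the per-model weights are illustrative assumptions, not the exact recipe used for this model.

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge checkpoints by adding weighted task vectors to the base.

    base: dict mapping parameter names to values (scalars here for clarity)
    finetuned_models: list of dicts with the same keys as `base`
    weights: per-model merge coefficients
    """
    merged = {}
    for name, base_param in base.items():
        # Task vector for each fine-tune = (fine-tuned param - base param)
        delta = sum(
            w * (ft[name] - base_param)
            for ft, w in zip(finetuned_models, weights)
        )
        merged[name] = base_param + delta
    return merged

# Toy example: two fine-tunes of the same base, equal weights.
base = {"w": 1.0}
fine_tunes = [{"w": 1.5}, {"w": 0.5}]
merged = task_arithmetic_merge(base, fine_tunes, weights=[0.5, 0.5])
# The two task vectors (+0.5 and -0.5) cancel, so merged["w"] == 1.0
```

In practice this is applied per tensor across all 14 checkpoints, typically with a tool such as mergekit rather than hand-rolled code.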
Key Capabilities & Performance
- Broad Skill Integration: By combining multiple specialized models, Everyone-LLM-7b-Base aims to offer a wide array of capabilities, from general instruction following to more specific tasks like coding and mathematical reasoning, inherited from its diverse base models.
- Open LLM Leaderboard Performance: The model achieves a competitive average score of 70.21 on the Open LLM Leaderboard, with notable scores including:
  - ARC (25-shot): 66.38
  - HellaSwag (10-shot): 86.02
  - MMLU (5-shot): 64.94
  - TruthfulQA (0-shot): 57.89
  - Winogrande (5-shot): 80.43
  - GSM8k (5-shot): 65.58
- Alpaca Prompt Template: The model uses the Alpaca prompt template for instruction following, a common and widely supported format.
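For reference, the no-input variant of the Alpaca template can be built as below. The header wording follows the original Stanford Alpaca format; the helper function is illustrative, and the exact template shipped with this model may differ slightly.

```python
# Standard Alpaca prompt template (no-input variant), as popularized by
# the Stanford Alpaca project.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize the benefits of model merging."))
```

The generated text after `### Response:` is the model's answer; the same format works for few-shot prompting by concatenating multiple instruction/response pairs.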
Use Cases
Everyone-LLM-7b-Base is suitable for general-purpose applications that benefit from broad knowledge and diverse abilities. Because it merges models specialized in chat, coding, and mathematical reasoning, it is a strong candidate for tasks requiring a combination of reasoning, common sense, and instruction adherence, without being overly specialized in any single domain.