Aryanne/Westest-7B
Aryanne/Westest-7B is a 7-billion-parameter language model created by Aryanne by merging senseable/WestLake-7B-v2 and chargoddard/piano-medley-7b with the task_anysize merge method. It demonstrates strong general language understanding and reasoning, achieving an average score of 74.03 on the Open LLM Leaderboard, including 72.18 on the AI2 Reasoning Challenge and 64.43 on MMLU. It is suitable for a wide range of natural language processing tasks that call for robust performance from a 7B model.
Model Overview
Aryanne/Westest-7B is a 7-billion-parameter language model developed by Aryanne, created by merging existing pre-trained models. It uses senseable/WestLake-7B-v2 as its base model, combined with chargoddard/piano-medley-7b via the task_anysize merge method using mergekit.
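Merges like this are typically driven by a declarative YAML file passed to mergekit. A minimal sketch of what such a configuration could look like, using standard mergekit config fields; note that task_anysize comes from a mergekit fork rather than upstream mergekit, so the exact options it accepts are an assumption here:

```yaml
# Hypothetical mergekit-style configuration for reproducing this merge.
# Field names follow the usual mergekit YAML schema; task_anysize is not
# part of upstream mergekit, so its specific parameters may differ.
merge_method: task_anysize
base_model: senseable/WestLake-7B-v2
models:
  - model: chargoddard/piano-medley-7b
dtype: float16
```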
Key Capabilities & Performance
This merged model exhibits strong general-purpose language understanding and reasoning, as evidenced by its performance on the Open LLM Leaderboard. It achieved an average score of 74.03, with notable results across various benchmarks:
- AI2 Reasoning Challenge (25-Shot): 72.18
- HellaSwag (10-Shot): 88.52
- MMLU (5-Shot): 64.43
- TruthfulQA (0-shot): 66.72
- Winogrande (5-shot): 86.58
- GSM8k (5-shot): 65.73
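As a quick sanity check, the reported leaderboard average is simply the arithmetic mean of the six benchmark scores listed above:

```python
# Verify that the reported Open LLM Leaderboard average (74.03) is the
# mean of the six individual benchmark scores.
scores = {
    "ARC (25-shot)": 72.18,
    "HellaSwag (10-shot)": 88.52,
    "MMLU (5-shot)": 64.43,
    "TruthfulQA (0-shot)": 66.72,
    "Winogrande (5-shot)": 86.58,
    "GSM8k (5-shot)": 65.73,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 74.03
```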
These scores indicate proficiency in tasks requiring commonsense reasoning, factual recall, and multi-step problem-solving.
Use Cases
Westest-7B is well-suited for applications requiring a capable 7B parameter model, including:
- General text generation and completion
- Question answering
- Summarization
- Reasoning tasks
- Educational applications
Its balanced performance across multiple benchmarks makes it a versatile choice for developers seeking a robust language model.