nlpguy/Westgate: A Merged 7B Language Model
nlpguy/Westgate is a 7 billion parameter language model developed by nlpguy by merging two pre-trained models: jsfs11/TurdusTrixBeagle-DARETIES-7B and senseable/garten2-7b. The merge uses SLERP (spherical linear interpolation), which blends the two models' weights along an arc on a hypersphere rather than averaging them linearly, with the aim of combining the strengths of both constituent models.
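To illustrate the idea behind SLERP, here is a minimal sketch of spherical linear interpolation over two flat parameter vectors. This is an educational simplification, not the actual merge pipeline used to build Westgate (which operates per-tensor across full model checkpoints, typically via a merge toolkit):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    Interpolates along the great-circle arc between v0 and v1, which
    preserves geometric structure that plain linear averaging can
    wash out. t=0 returns v0, t=1 returns v1.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    cos_omega = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    cos_omega = max(-1.0, min(1.0, cos_omega))   # clamp for numerical safety
    omega = math.acos(cos_omega)                 # angle between the vectors
    if abs(math.sin(omega)) < eps:               # nearly colinear: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit sphere,
# unlike linear averaging, which would shrink the norm to ~0.707... squared.
merged = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

In a real merge, this interpolation is applied tensor-by-tensor to corresponding weights of the two source models, often with a per-layer interpolation schedule rather than a single global `t`.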
Key Capabilities & Performance
Evaluated on the Open LLM Leaderboard, Westgate demonstrates strong performance across a range of benchmarks, achieving an average score of 73.84. Notable results include:
- AI2 Reasoning Challenge (25-Shot): 71.42
- HellaSwag (10-Shot): 88.14
- MMLU (5-Shot): 65.11
- TruthfulQA (0-shot): 62.59
- Winogrande (5-shot): 85.71
- GSM8k (5-shot): 70.05
These scores indicate proficiency in commonsense reasoning, language understanding, and mathematical problem-solving. The model's 4096-token context length supports processing moderately sized inputs.
Use Cases
Given its balanced performance across various reasoning and language understanding tasks, nlpguy/Westgate is well-suited for applications requiring a general-purpose 7B model, particularly where a blend of capabilities from its merged components is beneficial. It can be applied to tasks such as:
- General text generation and completion
- Question answering
- Reasoning-based tasks
- Summarization
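For any of these tasks, the model can be loaded like a standard causal language model. The following is a hypothetical inference sketch using the Hugging Face `transformers` library; the prompt and generation parameters (`max_new_tokens`, `torch_dtype="auto"`, `device_map="auto"`, which requires `accelerate`) are illustrative choices, not settings from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpguy/Westgate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place weights on available GPU/CPU automatically
)

prompt = "Explain why the sky is blue in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Downloading a 7B checkpoint requires roughly 14 GB of disk space in half precision, so a GPU with sufficient memory (or a quantized variant) is advisable for interactive use.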