saishf/West-Maid-7B

Text generation · Model size: 7B · Quant: FP8 · Context length: 4k · Published: Feb 3, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer · Open weights

saishf/West-Maid-7B is a 7-billion-parameter language model created by saishf through a SLERP merge of senseable/WestLake-7B-v2 and NeverSleep/Noromaid-7B-0.4-DPO. It supports a 4096-token context window and achieves an average score of 69.09 on the Open LLM Leaderboard, demonstrating capabilities across reasoning, common sense, and language understanding. It is designed for general-purpose applications that benefit from the balanced performance profile of its merged base models.


Model Overview

saishf/West-Maid-7B is a 7 billion parameter language model developed by saishf. It was created using the SLERP merge method from MergeKit, combining the strengths of two distinct base models: senseable/WestLake-7B-v2 and NeverSleep/Noromaid-7B-0.4-DPO.
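SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than averaging linearly, which preserves the magnitude of the interpolated weights. A minimal, illustrative sketch of the core formula is below; MergeKit's actual implementation operates per layer with configurable interpolation factors and additional edge-case handling.

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t is the interpolation factor: 0.0 returns v0, 1.0 returns v1.
    """
    # Angle between the two vectors via the normalized dot product.
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / max(n0 * n1, eps)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)

    # Nearly parallel vectors: fall back to plain linear interpolation.
    if theta < eps:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]

    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

Unlike a linear average, the midpoint of two orthogonal unit vectors under SLERP is still a unit vector, which is the property that motivates its use for merging model weights.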

Key Capabilities & Performance

This merged model demonstrates balanced performance across benchmarks, as evaluated on the Open LLM Leaderboard. It achieved an average score of 69.09, with the following results:

  • AI2 Reasoning Challenge (25-shot): 67.24
  • HellaSwag (10-shot): 86.44
  • MMLU (5-shot): 64.85
  • TruthfulQA (0-shot): 51.00
  • Winogrande (5-shot): 82.72
  • GSM8k (5-shot): 62.32

These scores indicate proficiency in reasoning, common sense, factual recall, and mathematical problem-solving. The model's 4096-token context length supports processing moderately long inputs.

When to Use This Model

West-Maid-7B is suitable for applications requiring a versatile 7B parameter model that benefits from the combined characteristics of its constituent models. Its balanced performance makes it a strong candidate for:

  • General text generation and understanding tasks.
  • Reasoning and question-answering scenarios.
  • Applications where a blend of capabilities from WestLake-7B-v2 and Noromaid-7B-0.4-DPO is desired.