WestOrcaNeural-V2-DARETIES-7B Overview
WestOrcaNeural-V2-DARETIES-7B is a 7-billion-parameter language model developed by jsfs11. It is constructed with the DARE TIES merge method from MergeKit, combining three specialized 7B models on a mistralai/Mistral-7B-v0.1 base.
Key Components and Merge Strategy
This model integrates the strengths of three distinct 7B models:
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- senseable/WestLake-7B-v2
- mlabonne/NeuralBeagle14-7B
The DARE TIES method, with specific density and weight parameters for each component, aims to leverage their individual capabilities for improved overall performance.
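Merges like this are typically described by a MergeKit YAML configuration. The sketch below shows the general shape of a DARE TIES merge over these components; the `density` and `weight` values are illustrative placeholders, since the source does not state the actual per-model parameters.

```yaml
# Illustrative MergeKit config for a DARE TIES merge (values are assumptions)
models:
  - model: mistralai/Mistral-7B-v0.1
    # base model: no density/weight of its own
  - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
    parameters:
      density: 0.5   # fraction of delta weights retained (illustrative)
      weight: 0.3    # contribution to the merge (illustrative)
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.5
      weight: 0.4
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

In DARE TIES, `density` controls how aggressively each model's delta from the base is pruned before the TIES sign-election step, and `weight` scales its contribution to the merged parameters.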
Performance Highlights
Evaluated on the Open LLM Leaderboard, WestOrcaNeural-V2-DARETIES-7B demonstrates competitive performance for its size:
- Average Score: 74.53
- AI2 Reasoning Challenge (25-shot): 72.10
- HellaSwag (10-shot): 88.21
- MMLU (5-shot): 64.64
- TruthfulQA (0-shot): 67.81
- Winogrande (5-shot): 83.74
- GSM8k (5-shot): 70.66
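As a quick sanity check, the reported average is the unweighted mean of the six benchmark scores. The snippet below simply recomputes it from the numbers above (an illustration, not part of any leaderboard tooling):

```python
# Open LLM Leaderboard scores reported for WestOrcaNeural-V2-DARETIES-7B
scores = {
    "ARC (25-shot)": 72.10,
    "HellaSwag (10-shot)": 88.21,
    "MMLU (5-shot)": 64.64,
    "TruthfulQA (0-shot)": 67.81,
    "Winogrande (5-shot)": 83.74,
    "GSM8k (5-shot)": 70.66,
}

# The leaderboard average is the unweighted mean of the six scores.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 74.53
```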
These scores indicate balanced capability across reasoning, common-sense inference, and language understanding tasks. The model's 4096-token context length supports moderately long input and output sequences.