jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B is a 7 billion parameter language model created by jsfs11, built upon the Mistral-7B-v0.1 base model through a DARE TIES merge of Westlake-7B-v2, kellemar-DPO-Orca-Distilled-7B-SLERP, and NeuralMarcoro14-7B. The model demonstrates strong general reasoning capabilities, achieving an average score of 73.98 on the Open LLM Leaderboard, with notable results in common sense reasoning and mathematical tasks. It is suited to diverse natural language understanding and generation applications, offering balanced performance across benchmarks.
Model Overview
jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B is a 7 billion parameter language model developed by jsfs11. It is constructed using the DARE TIES merge method on a Mistral-7B-v0.1 base, integrating three distinct models: senseable/Westlake-7B-v2, decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP, and mlabonne/NeuralMarcoro14-7B. This merging strategy aims to combine the strengths of its constituent models, resulting in a versatile and capable LLM.
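The exact merge recipe is not reproduced on this card, but DARE TIES merges are typically built with mergekit. A representative config sketch is shown below; the `density` and `weight` values are illustrative assumptions, not the values used for this model:

```yaml
# Illustrative mergekit config for a DARE TIES merge (values are assumptions).
models:
  - model: mistralai/Mistral-7B-v0.1
    # base model: no merge parameters
  - model: senseable/Westlake-7B-v2
    parameters:
      density: 0.5   # fraction of delta weights kept (illustrative)
      weight: 0.4    # contribution to the merged model (illustrative)
  - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
    parameters:
      density: 0.5
      weight: 0.3
  - model: mlabonne/NeuralMarcoro14-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

DARE randomly drops a fraction of each model's delta weights and rescales the rest, while TIES resolves sign conflicts between the contributing deltas before summing them onto the base model.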
Key Capabilities & Performance
This model exhibits strong performance across a range of benchmarks, as evaluated on the Hugging Face Open LLM Leaderboard. It achieves an average score of 73.98, indicating robust general reasoning and language understanding. Specific benchmark results include:
- AI2 Reasoning Challenge (25-shot): 71.93
- HellaSwag (10-shot): 88.06
- MMLU (5-shot): 64.99
- TruthfulQA (0-shot): 65.96
- Winogrande (5-shot): 82.79
- GSM8k (5-shot): 70.13
These scores highlight its proficiency in common sense reasoning, multi-task language understanding, and mathematical problem-solving.
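The reported leaderboard average is the unweighted mean of the six benchmark scores above, which can be checked directly:

```python
# Reproduce the Open LLM Leaderboard average from the six benchmark scores.
scores = {
    "ARC (25-shot)": 71.93,
    "HellaSwag (10-shot)": 88.06,
    "MMLU (5-shot)": 64.99,
    "TruthfulQA (0-shot)": 65.96,
    "Winogrande (5-shot)": 82.79,
    "GSM8k (5-shot)": 70.13,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 73.98
```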
When to Use This Model
This model is well-suited for applications requiring a balanced and capable 7B parameter model. Its strong performance across various reasoning and language understanding tasks makes it a good candidate for:
- General-purpose chatbots and conversational AI.
- Text generation and summarization.
- Reasoning-intensive tasks, including question answering and logical inference.
- Educational tools and content creation where factual accuracy and coherent responses are important.
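For the use cases above, the model can be loaded like any other Mistral-derived checkpoint via Hugging Face transformers. The snippet below is a minimal sketch; the prompt and generation settings (`max_new_tokens`, `temperature`) are illustrative, and running it requires downloading the full model weights:

```python
# Sketch: loading and querying the model with transformers.
# Generation parameters are illustrative, not tuned recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```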