eren23/merged-dpo-binarized-NeutrixOmnibe-7B
The eren23/merged-dpo-binarized-NeutrixOmnibe-7B is a 7-billion-parameter language model created by eren23, formed by merging eren23/dpo-binarized-NeutrixOmnibe-7B and Kukedlc/NeuTrixOmniBe-7B-model-remix using LazyMergekit. The model demonstrates strong general reasoning capabilities, achieving an average score of 76.20 across the Open LLM Leaderboard benchmarks. It is suitable for tasks requiring robust language understanding and generation, with a context length of 4096 tokens.
Model Overview
The eren23/merged-dpo-binarized-NeutrixOmnibe-7B is a 7-billion-parameter language model developed by eren23. It was created through a merge operation using LazyMergekit, combining the strengths of eren23/dpo-binarized-NeutrixOmnibe-7B and Kukedlc/NeuTrixOmniBe-7B-model-remix.
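The exact merge configuration is not reproduced in this card. As a rough illustration only, the sketch below builds a hypothetical mergekit-style SLERP configuration for the two parent models and writes it to YAML; the merge method, layer ranges, interpolation weights, and dtype are assumptions for illustration, not the settings actually used.

```python
# Hypothetical merge configuration sketch -- NOT the actual config used for this model.
# Assumes a mergekit-style SLERP merge of two 32-layer, Mistral-architecture 7B parents.
import yaml  # pip install pyyaml

merge_config = {
    "slices": [
        {
            "sources": [
                {"model": "eren23/dpo-binarized-NeutrixOmnibe-7B", "layer_range": [0, 32]},
                {"model": "Kukedlc/NeuTrixOmniBe-7B-model-remix", "layer_range": [0, 32]},
            ]
        }
    ],
    "merge_method": "slerp",  # spherical interpolation between the two parents (assumed)
    "base_model": "eren23/dpo-binarized-NeutrixOmnibe-7B",
    "parameters": {
        # Per-module interpolation weights are placeholders for illustration only.
        "t": [
            {"filter": "self_attn", "value": [0.0, 0.5, 0.3, 0.7, 1.0]},
            {"filter": "mlp", "value": [1.0, 0.5, 0.7, 0.3, 0.0]},
            {"value": 0.5},
        ]
    },
    "dtype": "bfloat16",
}

with open("merge_config.yaml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)

# The resulting YAML could then be passed to mergekit (which LazyMergekit wraps), e.g.:
#   mergekit-yaml merge_config.yaml ./merged-model --copy-tokenizer
```

LazyMergekit is a lightweight notebook wrapper around mergekit, so the underlying merge is defined by a YAML configuration of this general shape.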
Key Capabilities & Performance
This merged model exhibits strong performance across a range of benchmarks, as evaluated on the Open LLM Leaderboard. Its average score is 76.20, with notable results including:
- AI2 Reasoning Challenge (25-shot): 72.70
- HellaSwag (10-shot): 89.03
- MMLU (5-shot): 64.59
- TruthfulQA (0-shot): 76.90
- Winogrande (5-shot): 85.08
- GSM8K (5-shot): 68.92
These scores indicate its proficiency in reasoning, common-sense understanding, and general knowledge tasks. The model supports a context length of 4096 tokens.
Use Cases
Given its balanced performance across these benchmarks, the model is well suited to general-purpose language understanding and generation tasks where a 7B-parameter model is appropriate, including applications that call for robust reasoning and factual recall.
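As a quick-start sketch, the example below loads the model with Hugging Face transformers and generates a short completion. The precision, device placement, sampling parameters, and plain-text prompt format are assumptions; check the model repository for a preferred chat template before relying on a specific prompt style.

```python
# Minimal inference sketch (assumed settings); the model card does not prescribe exact usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eren23/merged-dpo-binarized-NeutrixOmnibe-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to what your hardware supports
    device_map="auto",
)

prompt = "Explain the difference between supervised and reinforcement learning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus generated tokens within the model's 4096-token context window.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```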