Model Overview
paulml/DPOB-INMTOB-7B is a 7-billion-parameter language model developed by paulml. It is a merged model combining two source models, liminerity/Omningotex-7b-slerp and eren23/merged-dpo-binarized-NeutrixOmnibe-7B. The merge was performed with the SLERP (spherical linear interpolation) method via LazyMergekit, which interpolates corresponding layers of the two source models.
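The exact merge configuration is not reproduced in this card. As an illustration only, a typical LazyMergekit SLERP config for a pair of Mistral-architecture 7B models looks like the sketch below; the layer range and the per-filter interpolation weights (`t` values) are assumptions, not the values actually used for this model:

```yaml
# Hypothetical mergekit SLERP config -- layer_range and t values are illustrative.
slices:
  - sources:
      - model: liminerity/Omningotex-7b-slerp
        layer_range: [0, 32]
      - model: eren23/merged-dpo-binarized-NeutrixOmnibe-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Omningotex-7b-slerp
parameters:
  t:
    # Separate weighting curves for self-attention and MLP layers,
    # matching the per-layer-type strategy described on this card.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # default for all other parameters
dtype: bfloat16
```

A `t` of 0 keeps the first model's weights, 1 keeps the second's, and intermediate values interpolate between them along the slerp path.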
Key Capabilities & Performance
This model performs well across a range of tasks on the Open LLM Leaderboard, achieving an average score of 76.21 across the six standard benchmarks:
- Reasoning: Scored 73.21 on the AI2 Reasoning Challenge (25-Shot) and 69.22 on GSM8k (5-shot).
- Common Sense: Achieved 89.00 on HellaSwag (10-Shot) and 84.69 on Winogrande (5-shot).
- General Knowledge: Demonstrated 64.54 on MMLU (5-Shot) and 76.60 on TruthfulQA (0-shot).
Usage
The model supports a context length of 4,096 tokens and can be used for text generation in Python through the Hugging Face transformers library. Its merge configuration applies distinct interpolation weights to the self-attention and MLP layers of the two source models.
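A minimal usage sketch with the transformers `pipeline` API is shown below. The generation parameters (`max_new_tokens`, `temperature`) are illustrative defaults, not values recommended by the model card, and loading the model downloads roughly 14 GB of bfloat16 weights, so the pipeline is built lazily inside the function:

```python
MODEL_ID = "paulml/DPOB-INMTOB-7B"
MAX_CONTEXT = 4096  # context length stated on the model card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a completion for `prompt` with the merged model."""
    # Imported here so this module can be inspected without transformers installed
    # and without triggering the (large) model download.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,  # half-precision weights to fit on one GPU
        device_map="auto",           # place layers on available devices
    )
    out = pipe(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return out[0]["generated_text"]


# Example call (requires a GPU with enough memory for ~14 GB of weights):
# print(generate("Explain spherical linear interpolation in one paragraph."))
```

Prompts plus generated tokens must stay within the 4,096-token context window; longer inputs should be truncated before generation.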
When to Use This Model
This model is suitable for general-purpose language understanding and generation tasks where a 7B-parameter model with balanced reasoning and common-sense capabilities is desired. Its consistent results across the benchmarks above suggest it is well suited to applications requiring conversational ability, factual recall, and logical inference.