Omningotex-7b-slerp: A Highly Accurate 7B Merged LLM
Omningotex-7b-slerp is a 7 billion parameter language model developed by liminerity, notable for the high accuracy it achieves through a multi-stage, iterative merging process. The model was constructed using gradient slerp, combining ingot-7b-slerp (itself a merge of blurred-beagle-7b-slerp and Macaroni-7b-Tied) with eren23's dpo-binarized-NeuralTrix-7B and dpo-binarized-NeutrixOmnibe-7B.
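To make the merge method concrete, here is a minimal sketch of spherical linear interpolation (slerp) applied to two parent models' weight tensors. This illustrates the general technique the merge's name refers to, not the exact recipe used for this model; the per-layer interpolation values at the end are hypothetical stand-ins for the "gradient" schedule.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, interpolates along the great-circle arc between
    them, and falls back to plain linear interpolation when they are nearly
    parallel (where slerp is numerically unstable).
    """
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp(v0n @ v1n, -1.0, 1.0)
    omega = torch.arccos(dot)           # angle between the two weight vectors
    if omega.abs() < 1e-4:              # nearly parallel: lerp is safer
        merged = (1 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)
    return merged.reshape(w0.shape).to(w0.dtype)

# A "gradient" slerp varies t by layer type, e.g. attention weights leaning
# toward one parent and MLP weights toward the other (hypothetical values):
t_per_layer = {"self_attn": 0.3, "mlp": 0.7}
```

In practice the per-tensor t values are declared in a merge configuration and applied across every layer of the two parents, so the merged model drifts smoothly between them rather than averaging uniformly.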
Key Capabilities & Performance
- Exceptional Accuracy: Achieves an average score of 76.33% on the Open LLM Leaderboard, positioning it among the top-performing 7B models.
- Strong Reasoning: Demonstrates solid performance on reasoning tasks, with 73.29% on the AI2 Reasoning Challenge (ARC) and 70.51% on GSM8k.
- General Language Understanding: Scores 88.96% on HellaSwag and 64.69% on MMLU, indicating robust general language comprehension.
- Merged Architecture: Built upon a complex merging strategy using LazyMergeKit, showcasing an experimental approach to model development.
When to Use This Model
This model is particularly well-suited for applications requiring high accuracy and strong general reasoning capabilities within a 7B parameter constraint. Its performance across diverse benchmarks suggests its utility for tasks involving question answering, summarization, and complex text understanding where precision is critical.
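For reference, the model can be loaded with the standard Hugging Face transformers text-generation API; the sketch below makes the usual assumptions (enough GPU memory for a 7B model, illustrative prompt and generation settings).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/Omningotex-7b-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

prompt = "Explain why the sky is blue in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```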