Overview
mlabonne/NeuralPipe-7B-ties is a 7-billion-parameter language model built on the Mistral-7B-v0.1 base. It was created by merging two fine-tuned models, OpenPipe/mistral-ft-optimized-1218 and mlabonne/NeuralHermes-2.5-Mistral-7B, using the TIES (TrIm, Elect Sign & Merge) method. TIES trims low-magnitude parameter changes and resolves sign conflicts between the constituent models before averaging, combining their strengths while reducing interference. The model demonstrates strong general capabilities across a range of benchmarks.
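The core TIES procedure can be illustrated on toy weight vectors. The sketch below is a deliberate simplification for intuition, not the mergekit implementation: each model's "task vector" (fine-tuned weights minus base weights) is trimmed to its highest-magnitude entries, a per-parameter sign is elected by summed mass, and only the deltas agreeing with the elected sign are averaged back onto the base.

```python
import numpy as np

def trim(delta, density):
    # Keep only the top-`density` fraction of entries by magnitude; zero the rest.
    k = int(np.ceil(density * delta.size))
    if k == 0:
        return np.zeros_like(delta)
    threshold = np.sort(np.abs(delta).ravel())[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def ties_merge(base, finetuned, density=0.5):
    # Task vectors: per-model deltas from the shared base, trimmed by magnitude.
    deltas = np.stack([trim(ft - base, density) for ft in finetuned])
    # Elect sign: the sign of the summed deltas at each position wins.
    elected = np.sign(deltas.sum(axis=0))
    # Merge: average only the deltas whose sign agrees with the elected sign.
    agree = (np.sign(deltas) == elected) & (deltas != 0)
    count = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, deltas, 0.0).sum(axis=0) / count
    return base + merged_delta
```

With two toy "models" pulling a parameter in opposite directions, that parameter's contributions cancel in the sign election and are dropped, while parameters the models agree on are averaged.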
Key Capabilities & Performance
NeuralPipe-7B-ties has been evaluated on the Open LLM Leaderboard, achieving a competitive average score of 71.55. Individual benchmark scores:
- AI2 Reasoning Challenge (25-shot): 67.92
- HellaSwag (10-shot): 86.04
- MMLU (5-shot): 64.24
- TruthfulQA (0-shot): 61.37
- Winogrande (5-shot): 80.19
- GSM8k (5-shot): 69.52
When to Use This Model
This model is a strong candidate for applications requiring robust general-purpose language understanding and generation. Its balanced performance across reasoning, common sense, and factual recall benchmarks makes it suitable for:
- General conversational AI: Engaging in diverse dialogues.
- Text summarization and generation: Creating coherent and contextually relevant text.
- Reasoning tasks: Solving problems that require logical inference.
- Educational tools: Assisting with question answering and knowledge retrieval.
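For the use cases above, the model can be loaded with the Hugging Face `transformers` library like any Mistral-based causal LM. This is a minimal sketch, assuming `transformers` and `torch` are installed; the prompt and generation parameters are illustrative, and the imports are deferred into the function so the snippet stays readable without the heavy dependencies.

```python
MODEL_ID = "mlabonne/NeuralPipe-7B-ties"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imported lazily: loading a 7B model requires torch/transformers and
    # significant memory, which a reader may not have at hand.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

`device_map="auto"` lets `accelerate` place layers across available GPUs (or CPU), and `torch_dtype="auto"` uses the checkpoint's native precision.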
Quantized versions (GGUF, AWQ, GPTQ) are available from TheBloke, enabling deployment on hardware with limited memory, including CPU-only setups.