saishf/Fett-Eris-Mix-7B
The saishf/Fett-Eris-Mix-7B is a 7 billion parameter language model, merged from Epiculous/Fett-uccine-7B, eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2, and ChaoticNeutrals/Eris_7B using the DARE TIES method. Built on OpenPipe/mistral-ft-optimized-1227 as the base, the model is designed for smart roleplay and maintains coherence even at extended context lengths of 8K+ tokens. It demonstrates strong performance across various benchmarks, including a 71.66% average on the Open LLM Leaderboard, making it suitable for nuanced conversational applications.
Model Overview
The saishf/Fett-Eris-Mix-7B is a 7 billion parameter language model created by saishf through a merge of several pre-trained models using the DARE TIES method. Its primary goal is to deliver a "smart roleplay" experience, combining the roleplay finesse of Epiculous/Fett-uccine-7B with eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 and ChaoticNeutrals/Eris_7B. The base model for the merge was OpenPipe/mistral-ft-optimized-1227.
Key Capabilities
- Enhanced Roleplay: Specifically designed and optimized for coherent and nuanced roleplay scenarios.
- Extended Context Coherence: Maintains strong coherence and consistency even at context lengths exceeding 8K tokens, which is beneficial for longer, more complex interactions.
- Merge Method: Uses DARE TIES, integrating contributions from Epiculous/Fett-uccine-7B, eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2, and ChaoticNeutrals/Eris_7B.
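To make the merge method above concrete, here is a minimal, illustrative Python sketch of the DARE TIES idea on toy parameter lists: DARE randomly drops a fraction of each model's delta (fine-tuned weights minus base weights) and rescales the survivors, and TIES elects a per-parameter sign and averages only the agreeing deltas. This is a simplified toy, not the actual mergekit implementation, and the helper names (`dare`, `ties_sign_elect`, `dare_ties`) are hypothetical:

```python
import random

def dare(delta, density, rng):
    # DARE: keep each delta entry with probability `density`,
    # rescaling survivors by 1/density to preserve the expected value.
    return [d / density if rng.random() < density else 0.0 for d in delta]

def ties_sign_elect(deltas):
    # TIES: elect a per-parameter sign from the sum of all deltas.
    return [1.0 if sum(col) >= 0 else -1.0 for col in zip(*deltas)]

def dare_ties(base, finetuned_models, density=0.5, seed=0):
    # Toy DARE TIES merge over flat parameter lists (not real tensors).
    rng = random.Random(seed)
    deltas = [dare([f - b for f, b in zip(ft, base)], density, rng)
              for ft in finetuned_models]
    signs = ties_sign_elect(deltas)
    merged = []
    for i, b in enumerate(base):
        # Average only the deltas that agree with the elected sign.
        agree = [d[i] for d in deltas if d[i] * signs[i] > 0]
        merged.append(b + (sum(agree) / len(agree) if agree else 0.0))
    return merged

base = [0.1, -0.2, 0.3, 0.0]
fts = [[0.2, -0.1, 0.3, 0.1],   # stand-ins for the fine-tuned models
       [0.3, -0.4, 0.3, -0.1]]
print(dare_ties(base, fts, density=0.5, seed=42))
```

Where no model's delta survives for a parameter, the base value passes through unchanged, which is why DARE TIES merges tend to stay close to the base model's behavior.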
Performance Benchmarks
Evaluated on the Open LLM Leaderboard, Fett-Eris-Mix-7B achieved an average score of 71.66%. Notable scores include:
- AI2 Reasoning Challenge (25-shot): 68.77%
- HellaSwag (10-shot): 87.33%
- MMLU (5-shot): 63.65%
- TruthfulQA (0-shot): 71.91%
- Winogrande (5-shot): 80.82%
- GSM8k (5-shot): 57.47%
Good For
- Roleplaying Applications: Ideal for scenarios requiring intelligent and consistent character interactions.
- Long-form Conversational AI: Its ability to maintain coherence over extended contexts makes it suitable for detailed and prolonged dialogues.
- Experimental Merging: Demonstrates the effectiveness of the DARE TIES method for combining specialized models.
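The exact merge recipe is not reproduced on this page, but a DARE TIES merge of these models in mergekit would be expressed roughly as follows; the `density` and `weight` values here are illustrative assumptions, not the recipe saishf actually used:

```yaml
# Hypothetical mergekit config sketch; density/weight values are illustrative.
models:
  - model: Epiculous/Fett-uccine-7B
    parameters:
      density: 0.5
      weight: 0.4
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
    parameters:
      density: 0.5
      weight: 0.3
  - model: ChaoticNeutrals/Eris_7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: OpenPipe/mistral-ft-optimized-1227
dtype: bfloat16
```

Here `density` controls what fraction of each model's deltas DARE retains, and `weight` scales each model's contribution to the merged weights.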