ChaoticNeutrals/Eris_7B: A Merged Model for Chat and Roleplay
Eris_7B is a 7-billion-parameter language model developed by Chaotic Neutrals, a collaborative effort between @Jeiku and @Nitral. It is a merge of two base models, ChaoticNeutrals/Prodigy_7B and Test157t/Prima-LelantaclesV6-7b, combined with the SLERP (spherical linear interpolation) method using distinct t parameters for the self-attention and MLP layers, with the aim of preserving the strengths of both constituents.
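For readers unfamiliar with the technique, the sketch below illustrates what a SLERP merge does for a single pair of weight tensors. It is a minimal, generic illustration rather than the authors' actual mergekit recipe, and the per-module values in `t_for_module` are assumed placeholders, not the settings used for Eris_7B.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0.0 returns w0, t = 1.0 returns w1; intermediate values follow the
    great-circle arc between the two weight vectors.
    """
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    sin_theta = torch.sin(theta)
    if sin_theta.abs() < eps:
        # Nearly colinear weights: fall back to plain linear interpolation.
        merged = (1.0 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * v0 \
               + (torch.sin(t * theta) / sin_theta) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Hypothetical per-module interpolation factors: the card notes that separate
# t values were applied to self-attention and MLP layers (exact values assumed).
t_for_module = {"self_attn": 0.5, "mlp": 0.5}
```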
Key Capabilities
- Optimized for Conversational AI: Eris_7B is designed to excel in roleplay (RP) and general chat applications, making it suitable for interactive dialogue systems.
- Format Versatility: It supports both Alpaca and ChatML instruction formats, offering flexibility for integration into various pipelines (see the prompt-format sketch after this list).
- Solid Benchmark Performance: The model achieves an average score of 73.68 on the Open LLM Leaderboard. Specific benchmark results include:
  - HellaSwag (10-shot): 87.99
  - Winogrande (5-shot): 84.21
  - AI2 Reasoning Challenge (25-shot): 71.42
  - MMLU (5-shot): 65.24
  - TruthfulQA (0-shot): 66.95
  - GSM8k (5-shot): 66.26
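As a reference for the two supported formats mentioned above, the sketch below writes out the standard Alpaca and ChatML prompt layouts as plain-string builders. The wording of the Alpaca preamble and the choice of system prompt are conventional defaults, not requirements of this model.

```python
def alpaca_prompt(instruction: str) -> str:
    # Standard single-turn Alpaca template.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    # Standard ChatML turn markers; the assistant header is left open
    # so the model generates the reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

Either string can then be tokenized and passed to the model; ChatML is generally the better fit for multi-turn chat because each turn carries explicit role markers.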
Good For
- Roleplaying Scenarios: Its specific optimization makes it a strong candidate for generating engaging and coherent roleplay interactions.
- General Chatbots: Developers building conversational agents will find its chat capabilities robust.
- Applications requiring Alpaca/ChatML compatibility: Seamless integration with systems supporting these popular instruction formats.
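For developers evaluating the model for these use cases, the following sketch shows one plausible way to load it with Hugging Face transformers and run a single ChatML-style roleplay turn. The device settings, sampling parameters, and system prompt are illustrative assumptions rather than recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Eris_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Build a ChatML-style prompt (see the format sketch above).
prompt = (
    "<|im_start|>system\nYou are a helpful roleplay partner.<|im_end|>\n"
    "<|im_start|>user\nDescribe the tavern we just walked into.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```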