DarkSapling-7B-v1.0 Overview
DarkSapling-7B-v1.0 is a 7-billion-parameter language model created by TeeZee by merging four Mistral-7B-based models: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser, KoboldAI/Mistral-7B-Holodeck-1, KoboldAI/Mistral-7B-Erebus-v3, and cognitivecomputations/samantha-mistral-7b. The merge aims to combine the strengths of its constituent models into a single versatile, capable LLM.
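The exact merge recipe is not documented here. Conceptually, weight-space merging combines corresponding parameters across models; the simplest variant is a weighted average of each parameter tensor. A minimal toy sketch of that idea (plain Python dicts standing in for real state dicts; the function name, weights, and "models" are illustrative, not the actual DarkSapling recipe):

```python
# Toy sketch of linear weight-space merging, the simplest of the techniques
# merge tools implement. The merge method and weights actually used for
# DarkSapling-7B-v1.0 are not documented, so everything here is hypothetical.

def merge_state_dicts(state_dicts, weights):
    """Weighted average of parameters that share the same key across models."""
    assert len(state_dicts) == len(weights)
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for sd, w in zip(state_dicts, weights)) / total
    return merged

# Four toy "models" with one scalar parameter each, merged with equal weights.
models = [{"layer.w": 1.0}, {"layer.w": 3.0}, {"layer.w": 5.0}, {"layer.w": 7.0}]
merged = merge_state_dicts(models, [1, 1, 1, 1])
print(merged["layer.w"])  # 4.0
```

Real merges operate on full tensors per layer and often use more elaborate schemes (e.g. SLERP or task-vector methods) rather than a plain average, but the per-parameter combination shown above is the core idea.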
Key Capabilities
- Content Versatility: Capable of generating both SFW (Safe For Work) and NSFW (Not Safe For Work) content, switching smoothly between the two based on context.
- Character Consistency: Demonstrates strong adherence to character cards, maintaining consistent persona throughout interactions.
- Storytelling: Delivers satisfactory storytelling, aided by the Holodeck merge component.
- Instruction Following: Excels at accurately following user instructions.
- Reasoning & Empathy: Inherits general reasoning from the Mistral base and empathetic responses from Samantha; the Erebus component can occasionally steer output toward darker scenarios.
Performance Metrics
Evaluated on the Open LLM Leaderboard, DarkSapling-7B-v1.0 achieved an average score of 61.52. Notable scores include:
- AI2 Reasoning Challenge (25-Shot): 61.60
- HellaSwag (10-Shot): 82.59
- MMLU (5-Shot): 62.46
- Winogrande (5-Shot): 77.19
Good For
- Applications requiring flexible content generation across SFW and NSFW domains.
- Creative writing, role-playing, and interactive storytelling scenarios.
- Use cases where maintaining character consistency and following complex instructions are crucial.