Kunoichi-7B: An RP-Focused General Purpose Model
Kunoichi-7B is a 7 billion parameter language model developed by SanjiWatsuki, created through a SLERP merge of the earlier RP-focused Silicon-Maid-7B and an unreleased "Ninja-7B" model. The primary goal was to enhance the model's general intelligence ('brain power') while preserving its predecessor's strong role-playing capabilities, particularly its ability to follow SillyTavern character cards.
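For intuition, SLERP (spherical linear interpolation) blends two models' weights along the arc between them rather than along a straight line, which tends to preserve weight-vector norms better than plain averaging. The sketch below is a minimal, generic SLERP over flat weight vectors; it is illustrative only, and the function name, the fixed interpolation factor `t`, and the flat-vector framing are assumptions here (real merge tools such as mergekit typically apply per-tensor or per-layer interpolation schedules).

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two vectors' directions.
    """
    # Normalize copies to measure the angle between the two vectors.
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Toy example on two orthogonal unit vectors:
mid = slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

At `t=0.5` on orthogonal unit vectors this lands at roughly `[0.707, 0.707]`, i.e. the midpoint of the arc, still at unit length, whereas a plain average would give `[0.5, 0.5]` with a shrunken norm.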
Key Capabilities & Performance
- General Purpose & Role-Playing: Designed to excel in both general conversational tasks and detailed role-playing scenarios.
- Enhanced Intelligence: Benchmarks indicate significant improvements in intelligence, with an MT Bench score of 8.14 and an EQ Bench score of 44.32, placing it competitively among 7B models.
- Benchmark Scores: Achieves a 64.9 on MMLU and 0.58 on Logic Test, demonstrating strong reasoning abilities. Its overall average on various benchmarks (AGIEval, GPT4All, TruthfulQA, Bigbench) is 57.54.
- Context Window: Supports a standard 4096-token context window, with experimental extension up to 16k tokens via an NTK RoPE alpha of 2.6.
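For the context extension mentioned above, NTK-aware RoPE scaling works by raising the rotary embedding base (often called `rope_theta`) so the position encodings stretch to cover a longer context. A commonly used mapping from an NTK alpha to a new base is sketched below; the default base of 10000 and head dimension of 128 are Mistral-style assumptions, not values stated in the model card, so treat this as a rough guide rather than the model's documented configuration.

```python
def ntk_rope_theta(base: float, alpha: float, head_dim: int) -> float:
    """Map an NTK RoPE alpha to an adjusted rotary base.

    Uses the NTK-aware scaling rule: new_base = base * alpha^(d / (d - 2)),
    where d is the per-head embedding dimension.
    """
    return base * alpha ** (head_dim / (head_dim - 2))

# Assumed defaults: base 10000 and head_dim 128 (typical for Mistral-style 7B models).
theta = ntk_rope_theta(10000.0, 2.6, 128)  # roughly 26,400
```

In practice you would pass the adjusted base to your inference backend's RoPE settings (many loaders expose this as a "rope theta"/"rope freq base" or an "NTK alpha" option directly) rather than computing it by hand.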
When to Use Kunoichi-7B
- Role-Playing Applications: Ideal for applications requiring detailed character interaction and adherence to character cards, especially with SillyTavern.
- General Conversational AI: Suitable for a wide range of general-purpose conversational tasks where strong intelligence and coherent responses are needed.
- Knowledge & Reasoning Tasks: Its strong performance across varied benchmarks suggests reliability for tasks requiring robust language understanding and generation.