Overview
Loyal-Macaroni-Maid-7B is a 7-billion-parameter model created by SanjiWatsuki, primarily focused on delivering engaging roleplay (RP) experiences with robust character card adherence and strong reasoning capabilities. It is built with the DARE TIES merge method, combining a Mistral-7B-v0.1 base with several fine-tuned models, including chargoddard/loyal-piano-m7, Toten5/Marcoroni-neural-chat-7B-v2, Undi95/Toppy-M-7B, NeverSleep/Noromaid-7b-v0.2, and athirdpath/NSFW_DPO_vmgb-7b.
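To make the merge method concrete, here is a toy NumPy sketch of the two ideas behind DARE TIES: DARE randomly drops most of each fine-tune's parameter delta and rescales what remains, and TIES resolves sign conflicts between deltas before averaging. This is an illustrative simplification, not the actual mergekit implementation; the function names and drop rate are assumptions for the example.

```python
import numpy as np

def dare_delta(base, finetuned, drop_rate=0.9, rng=None):
    """DARE: drop a random fraction of the fine-tune delta, rescale the rest."""
    rng = rng or np.random.default_rng(0)
    delta = finetuned - base
    mask = rng.random(delta.shape) >= drop_rate   # keep ~(1 - drop_rate) of entries
    return delta * mask / (1.0 - drop_rate)       # rescale to preserve the expected value

def ties_merge(base, deltas):
    """TIES-style sign election: keep only entries agreeing with the dominant sign."""
    stacked = np.stack(deltas)
    sign = np.sign(stacked.sum(axis=0))                   # elected sign per parameter
    agree = np.where(np.sign(stacked) == sign, stacked, 0.0)
    counts = np.maximum((agree != 0).sum(axis=0), 1)      # avoid divide-by-zero
    return base + agree.sum(axis=0) / counts              # average the agreeing deltas
```

Because the surviving delta entries are rescaled by 1/(1 - drop_rate), each merged model's expected contribution is preserved even though most of its individual weight changes are discarded, which is why several fine-tunes can be combined without drowning each other out.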
Key Capabilities
- Engaging Roleplay (RP/ERP): Optimized for interactive and immersive roleplay, with a significant portion of its training dedicated to RP data.
- Character Card Adherence: Demonstrates strong ability to maintain character consistency based on provided character cards.
- Reasoning Skills: Despite its RP focus, the model exhibits sharp reasoning, performing well on general intelligence benchmarks.
- Benchmark Performance: Achieves an MT-Bench score of 7.95 and an MMLU score of approximately 64.9, placing it competitively among 7B models and even surpassing some larger models, such as GPT-3.5-Turbo, on MT-Bench.
- Custom Prompt Format Support: Best results in SillyTavern are achieved using the Noromaid template, though it also supports Alpaca-like prompt formats.
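For the Alpaca-like prompt support mentioned above, a small helper like the following can build prompts outside of SillyTavern. This is a sketch of the generic Alpaca format; the exact template the model was tuned on (and the Noromaid template) may differ in wording and spacing.

```python
def alpaca_prompt(instruction, user_input=None):
    """Build a generic Alpaca-style prompt string (illustrative, not the model's exact template)."""
    parts = [
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.",
        f"### Instruction:\n{instruction}",
    ]
    if user_input:
        parts.append(f"### Input:\n{user_input}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)
```

For roleplay use, the character card text would typically go into the instruction (or a system-style preamble), with the chat history appended before the final response marker.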
Good For
- Roleplaying Applications: Ideal for SFW and ERP roleplay scenarios, especially when integrated with tools like SillyTavern.
- Character-Driven Interactions: Suitable for applications requiring consistent character portrayal and adherence to character definitions.
- General Conversational Tasks: Can handle basic ChatGPT-like tasks, leveraging the strong general-purpose models included in its merge.