Synatra-7B-v0.3-RP Overview
Synatra-7B-v0.3-RP is a 7-billion-parameter language model developed by maywell, built on the mistralai/Mistral-7B-Instruct-v0.1 base model. Version 0.3 focuses on roleplay (RP) tuning, aiming to improve conversational and character-driven text generation. Relative to earlier versions, it refines the training dataset and improves common-sense handling.
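The model can be loaded with the Hugging Face transformers library. Below is a minimal sketch, assuming the repository id maywell/Synatra-7B-v0.3-RP on the Hugging Face Hub and a GPU with enough memory for a 7B model in float16 (roughly 14 GB); the dtype and device settings are illustrative choices, not values prescribed by the model card.

```python
# Minimal loading sketch for Synatra-7B-v0.3-RP via transformers.
# Assumes the Hugging Face repo id below and sufficient GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Synatra-7B-v0.3-RP"  # assumed Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # place layers automatically across devices
)
```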
Key Capabilities & Features
- Roleplay Optimization: Specifically tuned for roleplay scenarios, making it suitable for interactive storytelling and character simulation.
- Mistral-7B-Instruct-v0.1 Base: Leverages the strong foundation of the Mistral-7B-Instruct-v0.1 architecture.
- ChatML Instruction Format: Adheres to the ChatML format for instructions, ensuring compatibility with standard chat interfaces (see the prompt-construction sketch after this list).
- Non-Commercial Use: Licensed under CC BY-NC 4.0, strictly for non-commercial applications.
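Since the model expects ChatML-formatted prompts, a roleplay request should be wrapped in the standard `<|im_start|>`/`<|im_end|>` delimiters. The sketch below continues from the loading example above; the persona text and generation settings are illustrative assumptions, not values from the model card.

```python
# Build a ChatML prompt for a roleplay turn. The delimiters are the
# standard ChatML tokens; the persona and user message are examples only.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a witty tavern keeper in a small fantasy village.",
    "Good evening! What's the news around town?",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,   # illustrative generation settings
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```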
Performance Insights
While fuller comparative benchmarks are still in progress, the model averages 57.38 on the Open LLM Leaderboard, with notable scores of 82.29 on HellaSwag (10-shot) and 60.8 on MMLU (5-shot). The developer notes that scores may appear lower than in previous versions because the switch from Alpaca-style prompts to ChatML adds a prompt prefix.
Good For
- Roleplay Applications: Ideal for developers building applications that require engaging and contextually rich roleplay interactions.
- Non-Commercial Projects: Suitable for personal projects, academic research, or other non-profit initiatives requiring a capable 7B model.